Nov 25 14:53:58 crc systemd[1]: Starting Kubernetes Kubelet...
Nov 25 14:53:59 crc restorecon[4688]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:53:59 crc restorecon[4688]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:53:59 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Nov 25 14:53:59 crc restorecon[4688]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:53:59 crc restorecon[4688]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:53:59 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 
14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Nov 25 14:54:00 crc 
restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 
14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:54:00 crc restorecon[4688]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Nov 25 14:54:00 crc restorecon[4688]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Nov 25 14:54:02 crc kubenswrapper[4731]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 25 14:54:02 crc kubenswrapper[4731]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Nov 25 14:54:02 crc kubenswrapper[4731]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 25 14:54:02 crc kubenswrapper[4731]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 25 14:54:02 crc kubenswrapper[4731]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 25 14:54:02 crc kubenswrapper[4731]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.479682 4731 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486288 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486320 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486329 4731 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486338 4731 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486347 4731 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486360 4731 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486371 4731 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486380 4731 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486388 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486396 4731 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486404 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486412 4731 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486420 4731 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486428 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486440 4731 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486448 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486456 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486463 4731 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486471 4731 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486479 4731 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486487 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486495 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486503 4731 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486510 4731 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486518 4731 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486526 4731 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486534 4731 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486542 4731 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486549 4731 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486560 4731 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486571 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486582 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486590 4731 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486598 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486606 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486616 4731 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486625 4731 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486634 4731 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486644 4731 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486652 4731 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486661 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486669 4731 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486677 4731 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486685 4731 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486692 4731 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486700 4731 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486712 4731 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486722 4731 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486731 4731 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486740 4731 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486749 4731 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486758 4731 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486766 4731 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486774 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486782 4731 feature_gate.go:330] unrecognized feature gate: PinnedImages Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486790 4731 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486798 4731 feature_gate.go:330] unrecognized feature gate: PlatformOperators Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486806 4731 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486842 4731 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486851 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486858 4731 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486866 4731 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486873 4731 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486883 4731 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486891 4731 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486899 4731 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486906 4731 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486916 4731 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486926 4731 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486936 4731 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.486945 4731 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487138 4731 flags.go:64] FLAG: --address="0.0.0.0" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487162 4731 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487180 4731 flags.go:64] FLAG: --anonymous-auth="true" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487194 4731 flags.go:64] FLAG: --application-metrics-count-limit="100" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487209 4731 flags.go:64] FLAG: --authentication-token-webhook="false" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487220 4731 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487235 4731 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487247 4731 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487257 4731 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487267 4731 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487279 4731 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487290 4731 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487300 4731 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487310 4731 flags.go:64] FLAG: --cgroup-root="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487322 4731 flags.go:64] FLAG: --cgroups-per-qos="true" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487333 4731 flags.go:64] FLAG: --client-ca-file="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487344 4731 flags.go:64] FLAG: --cloud-config="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487355 4731 flags.go:64] FLAG: --cloud-provider="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487366 4731 flags.go:64] FLAG: --cluster-dns="[]" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487384 4731 flags.go:64] FLAG: --cluster-domain="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487394 4731 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487404 4731 flags.go:64] FLAG: --config-dir="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487414 4731 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487424 4731 flags.go:64] FLAG: --container-log-max-files="5" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487436 4731 flags.go:64] FLAG: --container-log-max-size="10Mi" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487444 4731 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487453 4731 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Nov 25 14:54:02 crc 
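The long run of feature_gate.go:330 warnings above is the kubelet rejecting gate names it does not register: the featureGates set rendered into this node's kubelet configuration appears to carry the full OpenShift gate list, and the upstream kubelet only knows upstream gates, so it warns on each unknown name and skips it, while known-but-deprecated or GA gates (KMSv1, CloudDualStackNodeIPs, ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders) are still applied with the feature_gate.go:351/:353 removal warnings. A minimal stdlib-only Go sketch of that classify-and-warn pass follows; the registry excerpt and type shapes here are illustrative stand-ins, not the real k8s.io/component-base API:

    package main

    import "fmt"

    // spec is a reduced stand-in for a feature-gate registry entry: a default
    // value plus a maturity level that decides which warning fires when the
    // gate is explicitly set.
    type spec struct {
        def      bool
        maturity string // "ALPHA", "BETA", "GA", "DEPRECATED"
    }

    // known is a tiny, assumed excerpt of the kubelet's gate registry.
    var known = map[string]spec{
        "KMSv1":                     {def: false, maturity: "DEPRECATED"},
        "CloudDualStackNodeIPs":     {def: true, maturity: "GA"},
        "ValidatingAdmissionPolicy": {def: true, maturity: "GA"},
    }

    // set applies requested gates the way the log above behaves: unknown names
    // warn and are ignored; GA and deprecated names warn that the gate will be
    // removed, then take effect anyway.
    func set(enabled, requested map[string]bool) {
        for name, val := range requested {
            s, ok := known[name]
            if !ok {
                fmt.Printf("W unrecognized feature gate: %s\n", name)
                continue
            }
            switch s.maturity {
            case "GA":
                fmt.Printf("W Setting GA feature gate %s=%t. It will be removed in a future release.\n", name, val)
            case "DEPRECATED":
                fmt.Printf("W Setting deprecated feature gate %s=%t. It will be removed in a future release.\n", name, val)
            }
            enabled[name] = val
        }
    }

    func main() {
        enabled := map[string]bool{}
        set(enabled, map[string]bool{"RouteAdvertisements": true, "KMSv1": true, "CloudDualStackNodeIPs": true})
        fmt.Println("feature gates:", enabled)
    }

The same enumeration repeats several more times below because the gate set is re-parsed at several points during startup; each pass emits the identical warnings and ends with the same resolved map.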
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487138 4731 flags.go:64] FLAG: --address="0.0.0.0"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487162 4731 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487180 4731 flags.go:64] FLAG: --anonymous-auth="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487194 4731 flags.go:64] FLAG: --application-metrics-count-limit="100"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487209 4731 flags.go:64] FLAG: --authentication-token-webhook="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487220 4731 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487235 4731 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487247 4731 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487257 4731 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487267 4731 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487279 4731 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487290 4731 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487300 4731 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487310 4731 flags.go:64] FLAG: --cgroup-root=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487322 4731 flags.go:64] FLAG: --cgroups-per-qos="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487333 4731 flags.go:64] FLAG: --client-ca-file=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487344 4731 flags.go:64] FLAG: --cloud-config=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487355 4731 flags.go:64] FLAG: --cloud-provider=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487366 4731 flags.go:64] FLAG: --cluster-dns="[]"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487384 4731 flags.go:64] FLAG: --cluster-domain=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487394 4731 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487404 4731 flags.go:64] FLAG: --config-dir=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487414 4731 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487424 4731 flags.go:64] FLAG: --container-log-max-files="5"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487436 4731 flags.go:64] FLAG: --container-log-max-size="10Mi"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487444 4731 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487453 4731 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487463 4731 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487473 4731 flags.go:64] FLAG: --contention-profiling="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487482 4731 flags.go:64] FLAG: --cpu-cfs-quota="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487490 4731 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487500 4731 flags.go:64] FLAG: --cpu-manager-policy="none"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487511 4731 flags.go:64] FLAG: --cpu-manager-policy-options=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487522 4731 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487531 4731 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487540 4731 flags.go:64] FLAG: --enable-debugging-handlers="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487549 4731 flags.go:64] FLAG: --enable-load-reader="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487558 4731 flags.go:64] FLAG: --enable-server="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487567 4731 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487578 4731 flags.go:64] FLAG: --event-burst="100"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487588 4731 flags.go:64] FLAG: --event-qps="50"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487597 4731 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487606 4731 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487616 4731 flags.go:64] FLAG: --eviction-hard=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487627 4731 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487636 4731 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487645 4731 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487655 4731 flags.go:64] FLAG: --eviction-soft=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487664 4731 flags.go:64] FLAG: --eviction-soft-grace-period=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487674 4731 flags.go:64] FLAG: --exit-on-lock-contention="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487684 4731 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487694 4731 flags.go:64] FLAG: --experimental-mounter-path=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487704 4731 flags.go:64] FLAG: --fail-cgroupv1="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487714 4731 flags.go:64] FLAG: --fail-swap-on="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487723 4731 flags.go:64] FLAG: --feature-gates=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487735 4731 flags.go:64] FLAG: --file-check-frequency="20s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487745 4731 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487754 4731 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487764 4731 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487773 4731 flags.go:64] FLAG: --healthz-port="10248"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487784 4731 flags.go:64] FLAG: --help="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487793 4731 flags.go:64] FLAG: --hostname-override=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487802 4731 flags.go:64] FLAG: --housekeeping-interval="10s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487810 4731 flags.go:64] FLAG: --http-check-frequency="20s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487857 4731 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487870 4731 flags.go:64] FLAG: --image-credential-provider-config=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487881 4731 flags.go:64] FLAG: --image-gc-high-threshold="85"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487892 4731 flags.go:64] FLAG: --image-gc-low-threshold="80"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487904 4731 flags.go:64] FLAG: --image-service-endpoint=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487913 4731 flags.go:64] FLAG: --kernel-memcg-notification="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487922 4731 flags.go:64] FLAG: --kube-api-burst="100"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487931 4731 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487941 4731 flags.go:64] FLAG: --kube-api-qps="50"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487950 4731 flags.go:64] FLAG: --kube-reserved=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487959 4731 flags.go:64] FLAG: --kube-reserved-cgroup=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487968 4731 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487978 4731 flags.go:64] FLAG: --kubelet-cgroups=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487986 4731 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.487996 4731 flags.go:64] FLAG: --lock-file=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488005 4731 flags.go:64] FLAG: --log-cadvisor-usage="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488014 4731 flags.go:64] FLAG: --log-flush-frequency="5s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488024 4731 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488038 4731 flags.go:64] FLAG: --log-json-split-stream="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488047 4731 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488058 4731 flags.go:64] FLAG: --log-text-split-stream="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488067 4731 flags.go:64] FLAG: --logging-format="text"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488077 4731 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488087 4731 flags.go:64] FLAG: --make-iptables-util-chains="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488096 4731 flags.go:64] FLAG: --manifest-url=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488105 4731 flags.go:64] FLAG: --manifest-url-header=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488117 4731 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488127 4731 flags.go:64] FLAG: --max-open-files="1000000"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488138 4731 flags.go:64] FLAG: --max-pods="110"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488147 4731 flags.go:64] FLAG: --maximum-dead-containers="-1"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488156 4731 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488166 4731 flags.go:64] FLAG: --memory-manager-policy="None"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488175 4731 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488184 4731 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488193 4731 flags.go:64] FLAG: --node-ip="192.168.126.11"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488203 4731 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488225 4731 flags.go:64] FLAG: --node-status-max-images="50"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488234 4731 flags.go:64] FLAG: --node-status-update-frequency="10s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488243 4731 flags.go:64] FLAG: --oom-score-adj="-999"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488253 4731 flags.go:64] FLAG: --pod-cidr=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488264 4731 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488277 4731 flags.go:64] FLAG: --pod-manifest-path=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488286 4731 flags.go:64] FLAG: --pod-max-pids="-1"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488295 4731 flags.go:64] FLAG: --pods-per-core="0"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488304 4731 flags.go:64] FLAG: --port="10250"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488313 4731 flags.go:64] FLAG: --protect-kernel-defaults="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488323 4731 flags.go:64] FLAG: --provider-id=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488332 4731 flags.go:64] FLAG: --qos-reserved=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488340 4731 flags.go:64] FLAG: --read-only-port="10255"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488349 4731 flags.go:64] FLAG: --register-node="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488358 4731 flags.go:64] FLAG: --register-schedulable="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488368 4731 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488383 4731 flags.go:64] FLAG: --registry-burst="10"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488392 4731 flags.go:64] FLAG: --registry-qps="5"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488401 4731 flags.go:64] FLAG: --reserved-cpus=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488410 4731 flags.go:64] FLAG: --reserved-memory=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488421 4731 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488430 4731 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488439 4731 flags.go:64] FLAG: --rotate-certificates="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488448 4731 flags.go:64] FLAG: --rotate-server-certificates="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488457 4731 flags.go:64] FLAG: --runonce="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488466 4731 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488476 4731 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488485 4731 flags.go:64] FLAG: --seccomp-default="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488494 4731 flags.go:64] FLAG: --serialize-image-pulls="true"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488503 4731 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488512 4731 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488523 4731 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488533 4731 flags.go:64] FLAG: --storage-driver-password="root"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488542 4731 flags.go:64] FLAG: --storage-driver-secure="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488551 4731 flags.go:64] FLAG: --storage-driver-table="stats"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488560 4731 flags.go:64] FLAG: --storage-driver-user="root"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488570 4731 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488579 4731 flags.go:64] FLAG: --sync-frequency="1m0s"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488588 4731 flags.go:64] FLAG: --system-cgroups=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488597 4731 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488612 4731 flags.go:64] FLAG: --system-reserved-cgroup=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488621 4731 flags.go:64] FLAG: --tls-cert-file=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488630 4731 flags.go:64] FLAG: --tls-cipher-suites="[]"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488642 4731 flags.go:64] FLAG: --tls-min-version=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488651 4731 flags.go:64] FLAG: --tls-private-key-file=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488659 4731 flags.go:64] FLAG: --topology-manager-policy="none"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488669 4731 flags.go:64] FLAG: --topology-manager-policy-options=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488678 4731 flags.go:64] FLAG: --topology-manager-scope="container"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488688 4731 flags.go:64] FLAG: --v="2"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488699 4731 flags.go:64] FLAG: --version="false"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488711 4731 flags.go:64] FLAG: --vmodule=""
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488722 4731 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.488733 4731 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
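The flags.go:64 block above records every flag's value as parsed from the command line, before the file named by --config (/etc/kubernetes/kubelet.conf) is merged in, so most entries show upstream defaults: note --cgroup-driver="cgroupfs" here while the effective driver is later taken from CRI-O as "systemd", and --rotate-certificates="false" while client rotation is reported as enabled further down. When auditing a node, it can be handy to pull this dump back out of a journal capture as a name-to-value map; a stdlib-only Go sketch (the unit name kubelet in the usage line is an assumption, adjust for your host):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // flagRe matches the kubelet's flags.go:64 dump lines, e.g.
    //   I1125 14:54:02.487290 4731 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
    var flagRe = regexp.MustCompile(`flags\.go:64\] FLAG: (--[\w-]+)="(.*)"$`)

    func main() {
        effective := map[string]string{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            if m := flagRe.FindStringSubmatch(sc.Text()); m != nil {
                effective[m[1]] = m[2]
            }
        }
        fmt.Printf("captured %d flags, e.g. --node-ip=%q\n", len(effective), effective["--node-ip"])
    }

Fed with something like journalctl -u kubelet -b | go run dumpflags.go, this recovers the flag map for the boot; values the config file later overrides will still show their defaults here, which is exactly the caveat the two deprecation warnings at the top of this section are about.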
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.488968 4731 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.488980 4731 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.488991 4731 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489002 4731 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489011 4731 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489020 4731 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489029 4731 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489038 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489046 4731 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489055 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489063 4731 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489073 4731 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489081 4731 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489088 4731 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489096 4731 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489104 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489112 4731 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489120 4731 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489128 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489136 4731 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489144 4731 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489152 4731 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489160 4731 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489171 4731 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489181 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489190 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489204 4731 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489214 4731 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489223 4731 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489231 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489239 4731 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489248 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489256 4731 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489265 4731 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489273 4731 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489281 4731 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489290 4731 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489298 4731 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489306 4731 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489313 4731 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489322 4731 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489330 4731 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489338 4731 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489346 4731 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489354 4731 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489362 4731 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489370 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489377 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489385 4731 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489393 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489401 4731 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489408 4731 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489416 4731 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489424 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489432 4731 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489440 4731 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489448 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489456 4731 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489469 4731 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489479 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489488 4731 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489498 4731 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489508 4731 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489517 4731 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489526 4731 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489535 4731 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489543 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489553 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489561 4731 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489569 4731 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.489577 4731 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.489591 4731 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.502006 4731 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.502032 4731 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502122 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502130 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502137 4731 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502145 4731 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502150 4731 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502155 4731 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502159 4731 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502164 4731 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502168 4731 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502172 4731 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502176 4731 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502181 4731 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502185 4731 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502190 4731 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502194 4731 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502199 4731 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502203 4731 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502208 4731 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502213 4731 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502217 4731 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502221 4731 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502226 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502230 4731 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502235 4731 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502240 4731 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502245 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502249 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502256 4731 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502263 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502272 4731 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502280 4731 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502289 4731 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502296 4731 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502302 4731 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502308 4731 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502314 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502319 4731 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502324 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502330 4731 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502336 4731 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502340 4731 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502346 4731 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502351 4731 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502357 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502363 4731 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502368 4731 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502373 4731 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502377 4731 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502382 4731 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502388 4731 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502394 4731 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502398 4731 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502403 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502409 4731 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502415 4731 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502420 4731 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502426 4731 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502432 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502436 4731 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502442 4731 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502448 4731 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502453 4731 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502457 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502461 4731 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502465 4731 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502469 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502473 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502477 4731 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502481 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502484 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502488 4731 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.502496 4731 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502616 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502624 4731 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502628 4731 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502634 4731 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502637 4731 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502641 4731 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502645 4731 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502649 4731 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502653 4731 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502657 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502660 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502664 4731 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502668 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502672 4731 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502676 4731 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502679 4731 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502683 4731 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502686 4731 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502690 4731 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502694 4731 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502697 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502701 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502707 4731 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502711 4731 feature_gate.go:330] unrecognized feature gate: PinnedImages
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502716 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502720 4731 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502725 4731 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502730 4731 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502734 4731 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502738 4731 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502742 4731 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502747 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502750 4731 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502756 4731 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502760 4731 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502764 4731 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502768 4731 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502773 4731 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502777 4731 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502781 4731 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502785 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502789 4731 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502793 4731 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502796 4731 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502800 4731 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502804 4731 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502807 4731 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502811 4731 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502834 4731 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502839 4731 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502844 4731 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502848 4731 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502852 4731 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502856 4731 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502860 4731 feature_gate.go:330] unrecognized feature gate: OVNObservability
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502864 4731 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502869 4731 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502874 4731 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502880 4731 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502884 4731 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502889 4731 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502894 4731 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502898 4731 feature_gate.go:330] unrecognized feature gate: Example
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502904 4731 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502908 4731 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502913 4731 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502918 4731 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.502923 4731 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.503149 4731 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.503154 4731 feature_gate.go:330] unrecognized feature gate: SignatureStores
Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.503159 4731 feature_gate.go:330] unrecognized feature gate: NewOLM
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.503165 4731 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.506413 4731 server.go:940] "Client rotation is on, will bootstrap in background"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.514354 4731 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.514508 4731 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.517752 4731 server.go:997] "Starting client certificate rotation"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.517794 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.517972 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-20 09:59:19.598637944 +0000 UTC
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.518060 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.556420 4731 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.559100 4731 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError"
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.562600 4731 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
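The certificate_manager entries above explain themselves once the dates are lined up: the client certificate expires 2026-02-24, the manager computed a rotation deadline of 2025-11-20, and since the node booted on Nov 25 that deadline has already passed, so "Rotating certificates" fires immediately; the CSR POST then fails with connection refused simply because nothing is serving on api-int.crc.testing:6443 this early in startup, and the manager retries later. The deadline itself is a jittered point late in the certificate's validity window; a small Go sketch of that scheme (it mirrors the documented behaviour of the client-go certificate manager, and the notBefore date below is an assumption, since the log only prints the expiration):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // nextRotationDeadline picks a random instant in the 70-90% band of the
    // certificate's validity window, which is the scheme the kubelet's client
    // certificate manager uses for scheduling rotation.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.3*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        // Expiration taken from the log line above; the issue date is assumed
        // to be one year earlier for illustration.
        notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)
        notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)
        deadline := nextRotationDeadline(notBefore, notAfter)
        fmt.Println("rotation deadline:", deadline)
        if time.Now().After(deadline) {
            fmt.Println("deadline already passed: rotate immediately (as in the log)")
        }
    }

With a one-year certificate issued in late February, any 70-90% jitter lands in late November through late December, which is consistent with the 2025-11-20 deadline the kubelet logged.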
setting received from the CRI runtime" cgroupDriver="systemd" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.631154 4731 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-11-25-14-48-32-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.631210 4731 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.659007 4731 manager.go:217] Machine: {Timestamp:2025-11-25 14:54:02.652188555 +0000 UTC m=+1.399323065 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:1f00ad82-feaa-4329-a6d9-a9af68b2a156 BootID:7c7fdaeb-c6a4-441c-872f-17222039636c Filesystems:[{Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:89:be:06 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:89:be:06 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:e5:5a:d2 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f7:b1:1d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:c3:87:79 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:5f:40:bd Speed:-1 Mtu:1496} {Name:eth10 MacAddress:0e:89:a7:57:c7:6c Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:52:5d:01:fa:78:6b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: 
DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.659496 4731 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
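The Machine record above reports NumCores:12 across NumSockets:12, one core per socket with its own L1/L2 caches and a 16 MiB L3, which is the socket-per-vCPU topology QEMU guests commonly expose. A rough sketch of deriving the logical-CPU/socket split from standard Linux sysfs paths (the real cAdvisor inventory is considerably more involved):

```go
// Count logical CPUs and distinct physical packages from sysfs, the
// same raw data a cAdvisor-style machine inventory starts from.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	sockets := map[string]bool{}
	cpus, _ := filepath.Glob("/sys/devices/system/cpu/cpu[0-9]*")
	for _, cpu := range cpus {
		b, err := os.ReadFile(filepath.Join(cpu, "topology", "physical_package_id"))
		if err != nil {
			continue // CPU may be offline
		}
		sockets[strings.TrimSpace(string(b))] = true
	}
	// With one thread per core, as in the log above, logical CPUs,
	// cores, and (here) sockets all come out equal.
	fmt.Printf("logical CPUs: %d, sockets: %d\n", len(cpus), len(sockets))
}
```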
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.659936 4731 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.660610 4731 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.661120 4731 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.661192 4731 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.667897 4731 topology_manager.go:138] "Creating topology manager with none policy" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.667942 4731 container_manager_linux.go:303] "Creating device plugin manager" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.670711 4731 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.670771 4731 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.671235 4731 state_mem.go:36] "Initialized new in-memory state store" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.671417 4731 server.go:1245] "Using root directory" path="/var/lib/kubelet" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.685844 4731 kubelet.go:418] "Attempting to sync node with API server" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.685886 4731 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" 
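Each entry in the HardEvictionThresholds above pairs an eviction signal with either a percentage (nodefs.available at Percentage:0.1, i.e. 10%) or an absolute quantity (memory.available at 100Mi); in kubelet configuration these arrive as strings such as memory.available<100Mi and nodefs.available<10%. A sketch of that shape with an illustrative parser, not the kubelet's actual one:

```go
// Parse eviction-threshold expressions of the form "<signal><<value>",
// where value is either an absolute quantity ("100Mi") or a
// percentage ("10%"). Illustrative only.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

type threshold struct {
	signal     string
	percentage float64 // set when the value ends in '%'
	quantity   string  // set otherwise, e.g. "100Mi"
}

func parse(expr string) (threshold, error) {
	sig, val, ok := strings.Cut(expr, "<")
	if !ok {
		return threshold{}, fmt.Errorf("missing operator in %q", expr)
	}
	t := threshold{signal: sig}
	if strings.HasSuffix(val, "%") {
		p, err := strconv.ParseFloat(strings.TrimSuffix(val, "%"), 64)
		if err != nil {
			return t, err
		}
		t.percentage = p / 100 // "10%" -> 0.1, as printed in the log above
	} else {
		t.quantity = val
	}
	return t, nil
}

func main() {
	for _, e := range []string{"memory.available<100Mi", "nodefs.available<10%"} {
		t, err := parse(e)
		fmt.Printf("%+v err=%v\n", t, err)
	}
}
```

The GracePeriod:0 on every entry is what makes these hard thresholds; soft eviction thresholds carry a non-zero grace period.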
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.685916 4731 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.685940 4731 kubelet.go:324] "Adding apiserver pod source" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.686003 4731 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.692934 4731 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.693009 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.693138 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.693116 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.693232 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.694369 4731 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
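The reflector warning/error pairs above are the informers' initial LIST calls (Services, Nodes) being refused at 38.102.83.80:6443 while the API server is still down; the client keeps retrying with backoff until the endpoint accepts connections, which is routine on a single-node cluster where the kubelet itself must first launch the API server from the static pod path it just registered. A minimal sketch of that retry loop (URL reused from the log; timings and the attempt cap are illustrative):

```go
// Retry an initial LIST with exponential backoff until the apiserver
// accepts connections, mimicking the reflector behaviour seen above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func listUntilUp(url string, maxAttempts int) error {
	backoff := time.Second
	for i := 0; i < maxAttempts; i++ {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		// While the endpoint is down this prints the familiar
		// "connect: connection refused".
		fmt.Printf("W failed to list: %v; retrying in %s\n", err, backoff)
		time.Sleep(backoff)
		if backoff < 32*time.Second {
			backoff *= 2
		}
	}
	return fmt.Errorf("gave up after %d attempts", maxAttempts)
}

func main() {
	if err := listUntilUp("https://api-int.crc.testing:6443/api/v1/nodes?limit=500", 5); err != nil {
		fmt.Println("E", err)
	}
}
```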
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.697427 4731 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707470 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707532 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707550 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707565 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707587 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707602 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707616 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707637 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707653 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707667 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707704 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.707719 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.708724 4731 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.709626 4731 server.go:1280] "Started kubelet" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.710946 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.711620 4731 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.711663 4731 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 25 14:54:02 crc systemd[1]: Started Kubernetes Kubelet. 
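The ratelimit.go:55 line above configures a token bucket in front of the podresources endpoint: 100 requests per second sustained with bursts of 10, served over the unix socket shown just below. A sketch of the same policy using golang.org/x/time/rate (the kubelet's actual wiring around its gRPC server is omitted):

```go
// Token-bucket limiter matching qps=100 burstTokens=10 from the log.
// Requires the golang.org/x/time module.
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(100), 10) // qps=100, burst=10
	granted := 0
	for i := 0; i < 20; i++ {
		if limiter.Allow() {
			granted++
		}
	}
	// A tight loop drains the 10 burst tokens almost instantly; further
	// requests must wait on the 100/s refill.
	fmt.Println("granted immediately:", granted)
}
```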
Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.712543 4731 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.713869 4731 server.go:460] "Adding debug handlers to kubelet server" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.718108 4731 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.718175 4731 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.718291 4731 volume_manager.go:287] "The desired_state_of_world populator starts" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.718324 4731 volume_manager.go:289] "Starting Kubelet Volume Manager" Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.718343 4731 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.718297 4731 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 03:45:36.815755661 +0000 UTC Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.718410 4731 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1236h51m34.097353456s for next certificate rotation Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.718443 4731 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.732046 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.729771 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.732996 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.733651 4731 factory.go:55] Registering systemd factory Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.733676 4731 factory.go:221] Registration of the systemd container factory successfully Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.734135 4731 factory.go:153] Registering CRI-O factory Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.734170 4731 factory.go:221] Registration of the crio container factory successfully Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.734271 4731 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.734319 4731 factory.go:103] 
Registering Raw factory Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.734348 4731 manager.go:1196] Started watching for new ooms in manager Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.735631 4731 manager.go:319] Starting recovery of all containers Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.732705 4731 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b47a298350ae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 14:54:02.709568232 +0000 UTC m=+1.456702702,LastTimestamp:2025-11-25 14:54:02.709568232 +0000 UTC m=+1.456702702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751266 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751339 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751360 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751380 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751397 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751412 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751428 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751442 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751460 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751476 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751491 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751509 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751526 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751549 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751566 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751583 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751600 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751614 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751638 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751654 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751669 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751686 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751714 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751729 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751743 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751756 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751775 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751794 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751859 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751894 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751909 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751957 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751973 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.751986 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752000 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752012 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752027 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752046 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752062 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752077 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752093 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752107 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752120 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752133 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752146 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752160 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752176 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752193 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752206 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752218 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752233 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752248 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752267 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752282 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752297 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752313 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752331 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752345 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752359 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752374 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752389 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752402 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752416 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752434 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752447 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752468 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752481 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752493 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752507 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752521 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752544 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752556 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752570 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752585 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752598 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752612 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752624 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752637 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752651 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752666 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752678 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752693 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752708 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752721 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752734 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752749 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752764 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752778 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752792 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752804 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752837 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752850 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752865 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752879 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752893 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752907 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752921 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752938 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.752952 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753108 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753124 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753139 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753153 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753168 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753188 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753204 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753219 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753236 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753250 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753264 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753280 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753295 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753309 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753325 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753338 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753353 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753368 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753383 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" 
volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753396 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753410 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753423 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753436 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753449 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753462 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753474 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753487 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753500 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753513 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753526 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753538 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753552 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753565 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753579 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753592 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753605 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753621 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753632 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753645 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753657 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753669 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753682 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753697 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753709 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753721 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753734 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753747 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753759 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753773 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753785 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753799 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753831 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" 
volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753846 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753860 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753872 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753886 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753898 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753912 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753924 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753937 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753949 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753960 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753973 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753985 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.753996 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754008 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754020 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754034 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754047 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754060 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754072 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754085 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754096 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754111 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754126 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754139 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754152 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.754166 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761351 4731 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761426 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761449 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761467 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761481 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761503 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761520 4731 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761533 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761549 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761565 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761580 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761596 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761610 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761624 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761641 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761658 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761675 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761690 4731 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761704 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761715 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761728 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761738 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761751 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761763 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761777 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761788 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761799 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761808 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761837 4731 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761851 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761863 4731 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761873 4731 reconstruct.go:97] "Volume reconstruction finished" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.761881 4731 reconciler.go:26] "Reconciler: start to sync state" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.765961 4731 manager.go:324] Recovery completed Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.775488 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.777438 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.777568 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.777644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.778917 4731 cpu_manager.go:225] "Starting CPU manager" policy="none" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.778985 4731 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.779031 4731 state_mem.go:36] "Initialized new in-memory state store" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.806262 4731 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.810172 4731 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.810245 4731 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.810273 4731 kubelet.go:2335] "Starting kubelet main sync loop" Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.810451 4731 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.818432 4731 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 14:54:02 crc kubenswrapper[4731]: W1125 14:54:02.818637 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.818796 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.837771 4731 policy_none.go:49] "None policy: Start" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.839156 4731 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.839258 4731 state_mem.go:35] "Initializing new in-memory state store" Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.911587 4731 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.918765 4731 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.934420 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.940123 4731 manager.go:334] "Starting Device Plugin manager" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.940261 4731 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.940278 4731 server.go:79] "Starting device plugin registration server" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.940720 4731 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.940748 4731 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.941248 4731 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 14:54:02.941442 4731 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Nov 25 14:54:02 crc kubenswrapper[4731]: I1125 
14:54:02.941478 4731 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 25 14:54:02 crc kubenswrapper[4731]: E1125 14:54:02.948474 4731 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.041183 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.042263 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.042307 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.042320 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.042351 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:54:03 crc kubenswrapper[4731]: E1125 14:54:03.043042 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.111758 4731 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.111939 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.113184 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.113228 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.113240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.113381 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.113940 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.114087 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.114248 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.114271 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.114279 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.114397 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.114600 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.114655 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.115026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.115061 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.115073 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.115210 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.115341 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.115382 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.115515 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.115539 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.115551 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.116324 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.116355 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.116370 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.116690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.116835 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.116946 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.116695 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.117124 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.117148 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.117449 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.117592 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.117641 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.119382 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.119411 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.119424 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.119594 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.119627 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.119665 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.119682 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.119690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.120395 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.120420 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.120454 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166263 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166323 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166355 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166378 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166403 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166427 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166451 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166474 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166496 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166519 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166555 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166592 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166616 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166649 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.166674 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 
14:54:03.243405 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.245426 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.245486 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.245497 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.245525 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:54:03 crc kubenswrapper[4731]: E1125 14:54:03.246142 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267410 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267478 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267499 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267535 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267552 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267567 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267581 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267597 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267613 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267612 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267626 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267639 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267656 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267670 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267685 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267700 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267698 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267726 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267767 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267795 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267842 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267880 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267903 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267925 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267946 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267978 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.267996 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.268027 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.268047 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.268082 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: E1125 14:54:03.335330 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.456868 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.464082 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.482211 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.502184 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.508954 4731 util.go:30] "No sandbox for pod can be found. 
Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.456868 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.464082 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.482211 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.502184 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.508954 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:54:03 crc kubenswrapper[4731]: W1125 14:54:03.560353 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-472d7fa30e9f85383e3f106f3c1d36501bc9b568befd66067f9325448030fff4 WatchSource:0}: Error finding container 472d7fa30e9f85383e3f106f3c1d36501bc9b568befd66067f9325448030fff4: Status 404 returned error can't find the container with id 472d7fa30e9f85383e3f106f3c1d36501bc9b568befd66067f9325448030fff4 Nov 25 14:54:03 crc kubenswrapper[4731]: W1125 14:54:03.561531 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-6b26d30caa2369f38c8c61bff71a1b466ec69ff1dfb4cedda0ac00fbf57d2385 WatchSource:0}: Error finding container 6b26d30caa2369f38c8c61bff71a1b466ec69ff1dfb4cedda0ac00fbf57d2385: Status 404 returned error can't find the container with id 6b26d30caa2369f38c8c61bff71a1b466ec69ff1dfb4cedda0ac00fbf57d2385 Nov 25 14:54:03 crc kubenswrapper[4731]: W1125 14:54:03.562218 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-bbd8881f74bdde4e2dfa7db52e452429bbf78bb34b0ad194d6589f336fab9e7d WatchSource:0}: Error finding container bbd8881f74bdde4e2dfa7db52e452429bbf78bb34b0ad194d6589f336fab9e7d: Status 404 returned error can't find the container with id bbd8881f74bdde4e2dfa7db52e452429bbf78bb34b0ad194d6589f336fab9e7d Nov 25 14:54:03 crc kubenswrapper[4731]: W1125 14:54:03.563722 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-c1296876066ad8d84d3526337f87d14d02c404f598474471a2647d30acb64240 WatchSource:0}: Error finding container c1296876066ad8d84d3526337f87d14d02c404f598474471a2647d30acb64240: Status 404 returned error can't find the container with id c1296876066ad8d84d3526337f87d14d02c404f598474471a2647d30acb64240 Nov 25 14:54:03 crc kubenswrapper[4731]: W1125 14:54:03.564105 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-d185b761facbcef5abe0e90cd569ecba48d69b9e94deaba8fe6930fc3da3fe72 WatchSource:0}: Error finding container d185b761facbcef5abe0e90cd569ecba48d69b9e94deaba8fe6930fc3da3fe72: Status 404 returned error can't find the container with id d185b761facbcef5abe0e90cd569ecba48d69b9e94deaba8fe6930fc3da3fe72 Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.646613 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.648740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.648784 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.648793 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
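Right after the sandboxes are requested, cadvisor's watcher (manager.go:1169) logs a 404 for each new crio-<id> cgroup: the cgroup event arrives before the runtime has finished registering the container, so the lookup by ID transiently fails. The same 64-character IDs reappear moments later in the "ContainerStarted" PLEG events, which is the quickest way to confirm these warnings were harmless startup races. A sketch of that cross-check over the journal on stdin (illustrative, stdlib only):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
)

func main() {
	id := regexp.MustCompile(`[0-9a-f]{64}`) // container IDs as they appear in these records
	watch404 := map[string]bool{}
	startedLater := map[string]bool{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "Failed to process watch event") {
			if m := id.FindString(line); m != "" {
				watch404[m] = true
			}
		}
		if strings.Contains(line, `"Type":"ContainerStarted"`) {
			if m := id.FindString(line); m != "" {
				startedLater[m] = true
			}
		}
	}
	for c := range watch404 {
		fmt.Printf("%s...: started later = %v\n", c[:12], startedLater[c])
	}
}

For this boot all five 404'd IDs (472d7f..., 6b26d3..., bbd888..., c12968..., d185b7...) show up again as ContainerStarted data below, so each would report "started later = true".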
register node" node="crc" Nov 25 14:54:03 crc kubenswrapper[4731]: E1125 14:54:03.649323 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Nov 25 14:54:03 crc kubenswrapper[4731]: W1125 14:54:03.688137 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:03 crc kubenswrapper[4731]: E1125 14:54:03.688266 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.712908 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.815684 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bbd8881f74bdde4e2dfa7db52e452429bbf78bb34b0ad194d6589f336fab9e7d"} Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.817070 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"472d7fa30e9f85383e3f106f3c1d36501bc9b568befd66067f9325448030fff4"} Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.817984 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"d185b761facbcef5abe0e90cd569ecba48d69b9e94deaba8fe6930fc3da3fe72"} Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.818864 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c1296876066ad8d84d3526337f87d14d02c404f598474471a2647d30acb64240"} Nov 25 14:54:03 crc kubenswrapper[4731]: I1125 14:54:03.820254 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6b26d30caa2369f38c8c61bff71a1b466ec69ff1dfb4cedda0ac00fbf57d2385"} Nov 25 14:54:03 crc kubenswrapper[4731]: W1125 14:54:03.839474 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:03 crc kubenswrapper[4731]: E1125 14:54:03.839592 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:03 crc kubenswrapper[4731]: W1125 14:54:03.867776 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:03 crc kubenswrapper[4731]: E1125 14:54:03.867970 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:03 crc kubenswrapper[4731]: W1125 14:54:03.893650 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:03 crc kubenswrapper[4731]: E1125 14:54:03.893767 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:04 crc kubenswrapper[4731]: E1125 14:54:04.137354 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Nov 25 14:54:04 crc kubenswrapper[4731]: I1125 14:54:04.449897 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:04 crc kubenswrapper[4731]: I1125 14:54:04.451003 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:04 crc kubenswrapper[4731]: I1125 14:54:04.451034 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:04 crc kubenswrapper[4731]: I1125 14:54:04.451044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:04 crc kubenswrapper[4731]: I1125 14:54:04.451067 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:54:04 crc kubenswrapper[4731]: E1125 14:54:04.451553 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Nov 25 14:54:04 crc kubenswrapper[4731]: I1125 14:54:04.954099 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 14:54:04 crc kubenswrapper[4731]: I1125 14:54:04.954395 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:04 
crc kubenswrapper[4731]: E1125 14:54:04.954958 4731 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.712012 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:05 crc kubenswrapper[4731]: E1125 14:54:05.738553 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Nov 25 14:54:05 crc kubenswrapper[4731]: W1125 14:54:05.793449 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:05 crc kubenswrapper[4731]: E1125 14:54:05.793525 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.961200 4731 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b" exitCode=0 Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.961326 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b"} Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.961531 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.963329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.963368 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.963378 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.963714 4731 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19" exitCode=0
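Two distinct failure modes appear in the certificate-rotation attempts: the CSR POST at 14:54:04 above dies with "connect: connection refused" (nothing is accepting TCP on api-int.crc.testing:6443 yet), while the retry issued at 14:54:09 below eventually fails with "net/http: TLS handshake timeout" at 14:54:19; by then the socket accepts, but the apiserver is still too busy starting to complete a handshake. The two stages can be told apart with a bare dial. A rough probe sketch, assuming only the endpoint taken from these records:

package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

// Distinguishes "connection refused" (no listener yet) from a TLS
// handshake that stalls (listener up, apiserver still initializing).
func main() {
	addr := "api-int.crc.testing:6443" // endpoint from the records above
	raw, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Println("TCP dial failed:", err) // e.g. "connect: connection refused"
		return
	}
	defer raw.Close()
	conn := tls.Client(raw, &tls.Config{InsecureSkipVerify: true}) // probe only, no verification
	conn.SetDeadline(time.Now().Add(2 * time.Second))
	if err := conn.Handshake(); err != nil {
		fmt.Println("TLS handshake failed:", err) // the 14:54:19 failure mode
		return
	}
	fmt.Println("TLS handshake OK")
}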
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19"} Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.963843 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.965332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.965382 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.965395 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.965589 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7" exitCode=0 Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.965700 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.965939 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7"} Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.971804 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.971911 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.971945 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.974846 4731 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="710b7b753496ce77eee2b2f7bfd01eb64e39c698b8674398ce8dd2c357f2fcc6" exitCode=0 Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.975001 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.975003 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"710b7b753496ce77eee2b2f7bfd01eb64e39c698b8674398ce8dd2c357f2fcc6"} Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.976336 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.976594 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.976798 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.980419 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493"} Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.980497 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148"} Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.980520 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970"} Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.982193 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.983347 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.983385 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:05 crc kubenswrapper[4731]: I1125 14:54:05.983401 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.051695 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.054207 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.054259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.054290 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.054327 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:54:06 crc kubenswrapper[4731]: E1125 14:54:06.054881 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.80:6443: connect: connection refused" node="crc" Nov 25 14:54:06 crc kubenswrapper[4731]: W1125 14:54:06.575523 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:06 crc kubenswrapper[4731]: E1125 14:54:06.575621 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.712518 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 
38.102.83.80:6443: connect: connection refused Nov 25 14:54:06 crc kubenswrapper[4731]: W1125 14:54:06.837887 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:06 crc kubenswrapper[4731]: E1125 14:54:06.838046 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.988394 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5"} Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.988466 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.990001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.990056 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.990079 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.993682 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52"} Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.993775 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034"} Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.993811 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048"} Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.997523 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e1c02f6a900147229d88be040478ffbd81f6aac846bfd5c10174be1a6fbbb613"} Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.997596 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.998987 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.999058 4731 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:06 crc kubenswrapper[4731]: I1125 14:54:06.999077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:07 crc kubenswrapper[4731]: I1125 14:54:07.002208 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5"} Nov 25 14:54:07 crc kubenswrapper[4731]: I1125 14:54:07.002277 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75"} Nov 25 14:54:07 crc kubenswrapper[4731]: I1125 14:54:07.005154 4731 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c80d04996b1f8ec5b41a261cbd2a477bde7de5a76c6388944c786299f2eaae93" exitCode=0 Nov 25 14:54:07 crc kubenswrapper[4731]: I1125 14:54:07.005220 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c80d04996b1f8ec5b41a261cbd2a477bde7de5a76c6388944c786299f2eaae93"} Nov 25 14:54:07 crc kubenswrapper[4731]: W1125 14:54:07.013488 4731 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:07 crc kubenswrapper[4731]: E1125 14:54:07.013599 4731 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.80:6443: connect: connection refused" logger="UnhandledError" Nov 25 14:54:07 crc kubenswrapper[4731]: E1125 14:54:07.251859 4731 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187b47a298350ae8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 14:54:02.709568232 +0000 UTC m=+1.456702702,LastTimestamp:2025-11-25 14:54:02.709568232 +0000 UTC m=+1.456702702,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 14:54:07 crc kubenswrapper[4731]: I1125 14:54:07.712413 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:07 crc kubenswrapper[4731]: I1125 14:54:07.962963 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:08 crc 
kubenswrapper[4731]: I1125 14:54:08.012088 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.012154 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.012189 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb"} Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.012245 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c75ca277897bac2cc7773537dcf3f9d31cd14ad943e20b4ef62c84608422f906"} Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.012259 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e"} Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.012257 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.012365 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.012168 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.013581 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.013595 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.013636 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.013610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.013650 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.013679 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.014578 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.014689 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.014763 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.014845 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.014871 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.014882 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.014785 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.015159 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.015177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.151808 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.712097 4731 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.80:6443: connect: connection refused Nov 25 14:54:08 crc kubenswrapper[4731]: I1125 14:54:08.736300 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:08 crc kubenswrapper[4731]: E1125 14:54:08.940375 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="6.4s" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.016638 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.018361 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c75ca277897bac2cc7773537dcf3f9d31cd14ad943e20b4ef62c84608422f906" exitCode=255 Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.018468 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.018485 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"c75ca277897bac2cc7773537dcf3f9d31cd14ad943e20b4ef62c84608422f906"} Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.019274 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.019308 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.019318 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.019865 4731 scope.go:117] "RemoveContainer" containerID="c75ca277897bac2cc7773537dcf3f9d31cd14ad943e20b4ef62c84608422f906"
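The lease controller's retry interval doubles on every failure: interval="800ms" at 14:54:03, "1.6s" at 14:54:04, "3.2s" at 14:54:05, and "6.4s" at 14:54:08 just above. A minimal sketch of that doubling pattern (illustrative only; the kubelet's actual backoff may cap and jitter differently, and the 7s cap here is an assumption, not taken from the log):

package main

import (
	"fmt"
	"time"
)

// Exponential backoff matching the intervals in the records above:
// 800ms -> 1.6s -> 3.2s -> 6.4s, capped at an assumed maximum.
func main() {
	const maxInterval = 7 * time.Second // assumed cap, not observed in the log
	interval := 800 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed: retrying in %v\n", attempt, interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}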
(PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0f0947845befe4d07072cbe20e8c78fd51e25ad307aff74552e1f1eb53d842fc" exitCode=0 Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.021507 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0f0947845befe4d07072cbe20e8c78fd51e25ad307aff74552e1f1eb53d842fc"} Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.021648 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.021706 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.022711 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.022734 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.022744 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.022898 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.022955 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.022969 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.242084 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.256027 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.257786 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.257916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.257936 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:09 crc kubenswrapper[4731]: I1125 14:54:09.257966 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:54:10 crc kubenswrapper[4731]: I1125 14:54:10.026147 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 14:54:10 crc kubenswrapper[4731]: I1125 14:54:10.028029 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe"} Nov 25 14:54:10 crc kubenswrapper[4731]: I1125 14:54:10.028201 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:10 crc 
kubenswrapper[4731]: I1125 14:54:10.029567 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:10 crc kubenswrapper[4731]: I1125 14:54:10.029626 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:10 crc kubenswrapper[4731]: I1125 14:54:10.029645 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:10 crc kubenswrapper[4731]: I1125 14:54:10.033394 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5368fa0371c00168c1b9b863d630b76f0db01056843f0ea4f29bac38e750577a"} Nov 25 14:54:10 crc kubenswrapper[4731]: I1125 14:54:10.033457 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"595ca311cab164e4f036721f9d91c16837042f77ce78f858020c2cbc5bc5a9fa"} Nov 25 14:54:10 crc kubenswrapper[4731]: I1125 14:54:10.963178 4731 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 14:54:10 crc kubenswrapper[4731]: I1125 14:54:10.963287 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.005455 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.041508 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a12dbc3c71d2db1d01d1fc3795bb579c9b6684d80795645370b67e2716a4c44"} Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.041577 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fa3b4b17390093143d8349cb9c4223e90368eaafaebd0093147b02ee4811a825"} Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.041591 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a3d7a8745b89386e55eb513c44cee3d0c23e71fcc91be84b9fbe06d9c09bdc48"} Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.041610 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.041676 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.041713 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.042585 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.042623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.042642 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.043006 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.043040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:11 crc kubenswrapper[4731]: I1125 14:54:11.043054 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.044232 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.044232 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.047194 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.047246 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.047637 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.047656 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.047663 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.047638 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.312396 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.312615 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.313788 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.313837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:12 crc kubenswrapper[4731]: I1125 14:54:12.313848 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:12 crc kubenswrapper[4731]: E1125 14:54:12.948586 4731 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Nov 25 14:54:14 crc kubenswrapper[4731]: I1125 14:54:14.954568 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:14 crc kubenswrapper[4731]: I1125 14:54:14.954647 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Nov 25 14:54:14 crc kubenswrapper[4731]: I1125 14:54:14.954782 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:14 crc kubenswrapper[4731]: I1125 14:54:14.954860 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:14 crc kubenswrapper[4731]: I1125 14:54:14.956202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4731]: I1125 14:54:14.956233 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4731]: I1125 14:54:14.956242 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:14 crc kubenswrapper[4731]: I1125 14:54:14.956398 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:14 crc kubenswrapper[4731]: I1125 14:54:14.956435 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:14 crc kubenswrapper[4731]: I1125 14:54:14.956447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:15 crc kubenswrapper[4731]: I1125 14:54:15.059956 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:54:15 crc kubenswrapper[4731]: I1125 14:54:15.060210 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:15 crc kubenswrapper[4731]: I1125 14:54:15.061262 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:15 crc kubenswrapper[4731]: I1125 14:54:15.061299 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:15 crc kubenswrapper[4731]: I1125 14:54:15.061312 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:16 crc kubenswrapper[4731]: I1125 14:54:16.605540 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:16 crc kubenswrapper[4731]: I1125 14:54:16.605862 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:16 crc kubenswrapper[4731]: I1125 14:54:16.607498 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:16 crc kubenswrapper[4731]: I1125 14:54:16.607556 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:16 crc kubenswrapper[4731]: I1125 14:54:16.607569 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:16 crc kubenswrapper[4731]: I1125 14:54:16.611857 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:17 crc kubenswrapper[4731]: I1125 14:54:17.055903 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:17 crc kubenswrapper[4731]: I1125 14:54:17.060388 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:17 crc kubenswrapper[4731]: I1125 14:54:17.060496 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:17 crc kubenswrapper[4731]: I1125 14:54:17.060558 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:17 crc kubenswrapper[4731]: I1125 14:54:17.063029 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:18 crc kubenswrapper[4731]: I1125 14:54:18.058261 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:18 crc kubenswrapper[4731]: I1125 14:54:18.059474 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:18 crc kubenswrapper[4731]: I1125 14:54:18.059527 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:18 crc kubenswrapper[4731]: I1125 14:54:18.059540 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:19 crc kubenswrapper[4731]: E1125 14:54:19.244138 4731 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Nov 25 14:54:19 crc kubenswrapper[4731]: E1125 14:54:19.259087 4731 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Nov 25 14:54:19 crc kubenswrapper[4731]: I1125 14:54:19.717694 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 25 14:54:19 crc kubenswrapper[4731]: I1125 14:54:19.717806 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 14:54:19 crc kubenswrapper[4731]: I1125 14:54:19.725436 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Nov 
25 14:54:19 crc kubenswrapper[4731]: I1125 14:54:19.725507 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Nov 25 14:54:20 crc kubenswrapper[4731]: I1125 14:54:20.952893 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Nov 25 14:54:20 crc kubenswrapper[4731]: I1125 14:54:20.953319 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:20 crc kubenswrapper[4731]: I1125 14:54:20.955170 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:20 crc kubenswrapper[4731]: I1125 14:54:20.955263 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:20 crc kubenswrapper[4731]: I1125 14:54:20.955369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:20 crc kubenswrapper[4731]: I1125 14:54:20.963609 4731 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Nov 25 14:54:20 crc kubenswrapper[4731]: I1125 14:54:20.963736 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Nov 25 14:54:20 crc kubenswrapper[4731]: I1125 14:54:20.990697 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Nov 25 14:54:21 crc kubenswrapper[4731]: I1125 14:54:21.010083 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]log ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]etcd ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-api-request-count-filter ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-startkubeinformers ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/generic-apiserver-start-informers ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/priority-and-fairness-config-consumer ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/priority-and-fairness-filter ok Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/storage-object-count-tracker-hook 
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/start-apiextensions-informers ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/start-apiextensions-controllers ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/crd-informer-synced ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/start-system-namespaces-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/start-cluster-authentication-info-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/start-legacy-token-tracking-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/start-service-ip-repair-controllers ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/priority-and-fairness-config-producer ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/bootstrap-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/start-kube-aggregator-informers ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/apiservice-status-local-available-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/apiservice-status-remote-available-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/apiservice-registration-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/apiservice-wait-for-first-sync ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/apiservice-discovery-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/kube-apiserver-autoregistration ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]autoregister-completion ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/apiservice-openapi-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: [+]poststarthook/apiservice-openapiv3-controller ok
Nov 25 14:54:21 crc kubenswrapper[4731]: livez check failed
Nov 25 14:54:21 crc kubenswrapper[4731]: I1125 14:54:21.010151 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 14:54:21 crc kubenswrapper[4731]: I1125 14:54:21.065812 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:54:21 crc kubenswrapper[4731]: I1125 14:54:21.068119 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:21 crc kubenswrapper[4731]: I1125 14:54:21.068250 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:21 crc kubenswrapper[4731]: I1125 14:54:21.068282 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:21 crc kubenswrapper[4731]: I1125 14:54:21.085721 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Nov 25 14:54:21 crc kubenswrapper[4731]: I1125 14:54:21.882062 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Nov 25 14:54:21 crc kubenswrapper[4731]: I1125 14:54:21.882187 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Nov 25 14:54:22 crc kubenswrapper[4731]: I1125 14:54:22.069834 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Nov 25 14:54:22 crc kubenswrapper[4731]: I1125 14:54:22.070852 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:22 crc kubenswrapper[4731]: I1125 14:54:22.070972 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:22 crc kubenswrapper[4731]: I1125 14:54:22.071062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:22 crc kubenswrapper[4731]: E1125 14:54:22.948708 4731 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.722937 4731 trace.go:236] Trace[928184085]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:54:10.630) (total time: 14091ms):
Nov 25 14:54:24 crc kubenswrapper[4731]: Trace[928184085]: ---"Objects listed" error: 14091ms (14:54:24.722)
Nov 25 14:54:24 crc kubenswrapper[4731]: Trace[928184085]: [14.091932113s] [14.091932113s] END
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.723764 4731 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.728790 4731 trace.go:236] Trace[1859194271]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:54:12.559) (total time: 12169ms):
Nov 25 14:54:24 crc kubenswrapper[4731]: Trace[1859194271]: ---"Objects listed" error: 12169ms (14:54:24.728)
Nov 25 14:54:24 crc kubenswrapper[4731]: Trace[1859194271]: [12.169549587s] [12.169549587s] END
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.728840 4731 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.731280 4731 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.733653 4731 trace.go:236] Trace[309274337]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:54:12.927) (total time: 11805ms):
Nov 25 14:54:24 crc kubenswrapper[4731]: Trace[309274337]: ---"Objects listed" error: 11805ms (14:54:24.733)
Nov 25 14:54:24 crc kubenswrapper[4731]: Trace[309274337]: [11.805816402s] [11.805816402s] END
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.733701 4731 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.735552 4731 trace.go:236] Trace[234505573]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (25-Nov-2025 14:54:12.083) (total time: 12651ms):
Nov 25 14:54:24 crc kubenswrapper[4731]: Trace[234505573]: ---"Objects listed" error: 12651ms (14:54:24.735)
Nov 25 14:54:24 crc kubenswrapper[4731]: Trace[234505573]: [12.651978395s] [12.651978395s] END
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.735597 4731 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.967701 4731 apiserver.go:52] "Watching apiserver"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.970963 4731 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.971267 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"]
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.971754 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.971789 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Nov 25 14:54:24 crc kubenswrapper[4731]: E1125 14:54:24.971847 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.972110 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.972162 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:54:24 crc kubenswrapper[4731]: E1125 14:54:24.972339 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.972655 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.972669 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:54:24 crc kubenswrapper[4731]: E1125 14:54:24.972945 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.974468 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.976761 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.976807 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.977058 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.977324 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.977464 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.977702 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.977734 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Nov 25 14:54:24 crc kubenswrapper[4731]: I1125 14:54:24.978068 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.004139 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.020561 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.021163 4731 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.032864 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.032933 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.032968 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.032995 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033019 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033045 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033076 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
\"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033123 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033147 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033168 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033189 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033227 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033251 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033320 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033343 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033390 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033420 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033455 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033477 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033534 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033562 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033587 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033620 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033644 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033674 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033696 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033717 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033742 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033764 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033788 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033830 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033860 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033889 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034007 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034039 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034063 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:54:25 crc 
kubenswrapper[4731]: I1125 14:54:25.034089 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034127 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034156 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033362 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034214 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033512 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033588 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033557 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033836 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.033920 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034058 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034087 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034168 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034184 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034359 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034372 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034461 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034482 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034497 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034531 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034558 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034570 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034583 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034580 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034610 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034640 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034667 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034696 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034725 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034749 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034782 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034801 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034882 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034809 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.034954 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.035033 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.035006 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.035078 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.035827 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.035879 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.035920 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.035995 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036013 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036098 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.035115 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036118 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036215 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036234 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036273 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036296 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036337 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036340 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036412 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036452 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036505 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036542 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036552 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036575 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036571 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036604 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036795 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036852 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.036953 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037016 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037026 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037066 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037221 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037301 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037498 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037508 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037553 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037602 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037603 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.037749 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038019 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038015 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038086 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038102 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038269 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038293 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038288 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038504 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038554 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038550 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038627 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038685 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038758 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.038781 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040257 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040327 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040369 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040401 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040426 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040499 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040521 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040554 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040586 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040614 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040649 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040678 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040705 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040701 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040739 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040780 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040889 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040919 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.041140 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.041762 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.041801 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040718 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040843 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.041319 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.041364 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.041424 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.040981 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.041666 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.041685 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.041895 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:54:25.541862148 +0000 UTC m=+24.288996578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.042030 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.042136 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
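Two transient failures surface in the entries just above, both of the same "kubelet restarted before its dependencies" class. The E-level entry fails because the kubevirt.io.hostpath-provisioner CSI driver has not yet re-registered with this kubelet, so TearDownAt cannot obtain a CSI client; the status patch fails because the pod webhook at 127.0.0.1:9743 is not listening yet. Rather than give up, kubelet requeues the unmount with a growing delay, and the first retry delay of 500ms is visible in "durationBeforeRetry 500ms". The sketch below illustrates that retry policy; the doubling factor and the cap are assumptions for illustration, not a quote of kubelet's exact constants.

package main

import (
	"fmt"
	"time"
)

// backoff models the retry delay implied by "durationBeforeRetry":
// exponential growth from a 500ms base, clamped at a maximum.
type backoff struct {
	delay time.Duration
	max   time.Duration
}

func (b *backoff) next() time.Duration {
	if b.delay == 0 {
		b.delay = 500 * time.Millisecond // first retry, as in the log
		return b.delay
	}
	b.delay *= 2 // assumed doubling factor
	if b.delay > b.max {
		b.delay = b.max
	}
	return b.delay
}

func main() {
	b := &backoff{max: 2 * time.Minute} // assumed cap
	for i := 0; i < 5; i++ {
		fmt.Printf("retry %d after %v\n", i+1, b.next())
	}
}

Once the driver's node plugin comes back and re-registers over the kubelet plugin socket, a later retry should succeed on its own; checking `kubectl get csidriver` for kubevirt.io.hostpath-provisioner is one way to watch for the cluster-side object, though node-local registration is what the error message is actually about.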
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.042327 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043321 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043404 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043436 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043466 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043500 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043526 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043553 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043580 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043615 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043641 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043673 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043700 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043728 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043757 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043790 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043833 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043865 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043897 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043925 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043950 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.043980 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044013 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044047 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044078 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044113 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044147 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044183 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044259 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044295 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044324 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044487 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044529 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044598 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044639 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044671 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044708 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044736 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044767 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044800 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044848 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044880 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044913 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044941 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044965 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.044995 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.045334 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.045437 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.045483 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:54:25 crc kubenswrapper[4731]: 
I1125 14:54:25.045518 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.045561 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.045611 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.045606 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.045717 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.045915 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". 
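With a couple of hundred of these interleaved operations, pairing each "UnmountVolume started" entry with its "TearDown succeeded" counterpart by UniqueName shows what is still outstanding (here, the CSI PVC above is the only volume that fails). A small filter along those lines follows; the regular expressions are assumptions fitted to the message shapes visible in this log, not a stable kubelet log format.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

var (
	// "started" lines carry escaped quotes inside a structured klog message.
	started = regexp.MustCompile(`UnmountVolume started for volume \\?"([^"\\]+)\\?" \(UniqueName: \\?"([^"\\]+)\\?"`)
	// "succeeded" lines quote the plugin-path UniqueName directly.
	succeeded = regexp.MustCompile(`UnmountVolume\.TearDown succeeded for volume "([^"]+)"`)
)

func main() {
	pending := map[string]string{} // UniqueName -> volume name
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := started.FindStringSubmatch(line); m != nil {
			pending[m[2]] = m[1]
		} else if m := succeeded.FindStringSubmatch(line); m != nil {
			delete(pending, m[1]) // succeeded lines key on the UniqueName
		}
	}
	for unique, name := range pending {
		fmt.Printf("never saw TearDown succeed: %s (%s)\n", name, unique)
	}
}

Fed the kubelet journal on stdin (for example via journalctl -u kubelet), it prints the volumes whose unmount never completed.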
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.045959 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046248 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046302 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046343 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046384 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046415 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046454 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046450 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046481 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046514 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046601 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046626 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046664 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046690 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046712 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046735 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046763 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046789 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046811 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: 
\"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046844 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046868 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046889 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046905 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046926 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046949 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046940 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.046967 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047256 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047301 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047332 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047366 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047232 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047282 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047360 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047709 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047739 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047743 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047835 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047533 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.047908 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.048118 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.048216 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.048468 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.048556 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.048663 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.049026 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.049418 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.049654 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.051062 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.051134 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.051246 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.051269 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.051293 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.051428 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.051448 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.051961 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.052056 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.052267 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.052374 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.052398 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.052590 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.052600 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.052733 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.052760 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.054377 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). 
InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.054791 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.054830 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.054886 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055211 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055357 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.051766 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055497 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055675 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055737 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055779 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055835 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055850 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055883 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055929 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055971 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.056010 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.056144 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058353 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058387 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058420 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058457 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058518 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058550 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058584 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058617 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058649 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058681 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058708 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058737 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058766 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058804 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058852 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058882 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058911 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058942 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058978 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059010 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059071 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059118 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059154 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059193 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059225 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059269 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059311 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059342 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059378 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059410 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055885 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.055976 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.060583 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.056656 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.056706 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.056963 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.057044 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.057224 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.057302 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.051476 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058260 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058522 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058536 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058549 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058568 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058715 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058943 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.058977 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059023 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059002 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059130 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059136 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059137 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059380 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059504 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059581 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059697 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059708 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.059741 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.060164 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.060318 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.060599 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.060947 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.061181 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.061203 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.061698 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.061756 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.061762 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.061882 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.062220 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.062257 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.062421 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.062471 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.062589 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.063147 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.063430 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.063455 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.063509 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.063649 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.063789 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.063811 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.063834 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064064 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064109 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064148 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064302 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064356 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064398 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064429 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064446 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064467 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064529 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064586 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.064608 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.064642 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.065123 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.065538 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.065569 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.065018 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.065741 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.065834 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.065877 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-11-25 14:54:25.565244278 +0000 UTC m=+24.312378708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.065908 4731 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.065937 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.065996 4731 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066011 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066024 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066034 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066045 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066055 4731 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066065 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066075 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066085 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066097 4731 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066107 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066117 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066128 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066139 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066150 4731 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066160 4731 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066155 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.066226 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066171 4731 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066263 4731 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.066269 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:25.566259174 +0000 UTC m=+24.313393604 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066309 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066325 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066342 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066376 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066391 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066404 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066417 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066448 4731 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066461 4731 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066475 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066487 4731 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066499 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" 
DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066525 4731 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066537 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066547 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066558 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066568 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066580 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066609 4731 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066622 4731 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066635 4731 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066649 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066660 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066693 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066704 4731 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066714 4731 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066725 4731 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066725 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066737 4731 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066774 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066789 4731 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066801 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066829 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066840 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066852 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066864 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066875 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc 
kubenswrapper[4731]: I1125 14:54:25.066887 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066898 4731 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066888 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066911 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066967 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.066988 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067003 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067017 4731 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067030 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067043 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067058 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067071 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067083 
4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067095 4731 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067109 4731 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067123 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067136 4731 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067148 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067161 4731 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067175 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067192 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067206 4731 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067219 4731 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067231 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067246 4731 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067258 4731 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067271 4731 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067284 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067296 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067309 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067323 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067335 4731 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067349 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067362 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067373 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067383 4731 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067393 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067402 4731 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067410 4731 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067419 4731 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067428 4731 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067437 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067446 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067455 4731 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067464 4731 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067472 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067480 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067492 4731 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067500 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067509 4731 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067518 4731 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067526 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: 
\"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067535 4731 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067543 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067553 4731 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067562 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067572 4731 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067582 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067591 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067620 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067629 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067638 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067647 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067655 4731 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 
14:54:25.067663 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067672 4731 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067898 4731 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067915 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.068650 4731 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.068698 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.068717 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.068731 4731 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.067928 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.068050 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.068900 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069286 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069311 4731 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069327 4731 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069343 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069356 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069370 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069383 4731 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069397 4731 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069410 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069426 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069439 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069452 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069466 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 
crc kubenswrapper[4731]: I1125 14:54:25.069479 4731 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069491 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069504 4731 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069518 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069532 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069545 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069559 4731 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069573 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069588 4731 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069548 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069602 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069693 4731 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069707 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069719 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069750 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069764 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069778 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069789 4731 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069777 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069800 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069880 4731 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069899 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069914 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069932 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069946 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069966 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069982 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.069996 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.070011 4731 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.070025 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.070039 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.070051 4731 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on 
node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.070124 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.071273 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.075014 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.075653 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.080267 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.080335 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.083103 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.083262 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.086021 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.085651 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.087392 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:25.586170366 +0000 UTC m=+24.333304976 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.090346 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.091320 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.091373 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.091394 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.091472 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:25.591447449 +0000 UTC m=+24.338582059 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.099959 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.101898 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.102877 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.111242 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.112217 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.113025 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.114685 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.125176 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.125385 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171207 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171279 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171325 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171342 4731 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171358 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171371 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171384 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171397 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171409 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171421 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171434 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171447 4731 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on 
node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171459 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171472 4731 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171484 4731 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171496 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171489 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171508 4731 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171563 4731 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171582 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171594 4731 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171605 4731 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171618 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171627 4731 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171637 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" 
(UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171649 4731 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171659 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.171562 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.285761 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.292058 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.298655 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Nov 25 14:54:25 crc kubenswrapper[4731]: W1125 14:54:25.304267 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-99dcb5cac4cd5c73d93afa210c8ec7a4a5a404a2cc2c97257dbd7e55c792ace2 WatchSource:0}: Error finding container 99dcb5cac4cd5c73d93afa210c8ec7a4a5a404a2cc2c97257dbd7e55c792ace2: Status 404 returned error can't find the container with id 99dcb5cac4cd5c73d93afa210c8ec7a4a5a404a2cc2c97257dbd7e55c792ace2 Nov 25 14:54:25 crc kubenswrapper[4731]: W1125 14:54:25.314877 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-87f10d91292241e92d639160393d1219ab798a8aeca3790f4e057d455c621c99 WatchSource:0}: Error finding container 87f10d91292241e92d639160393d1219ab798a8aeca3790f4e057d455c621c99: Status 404 returned error can't find the container with id 87f10d91292241e92d639160393d1219ab798a8aeca3790f4e057d455c621c99 Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.575149 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.575240 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 
25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.575302 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.575383 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:54:26.575361014 +0000 UTC m=+25.322495444 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.575449 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.575546 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.575577 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:26.575547838 +0000 UTC m=+25.322682438 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.575669 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:26.575655881 +0000 UTC m=+25.322790481 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.659975 4731 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.662190 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.662245 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.662257 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.662360 4731 kubelet_node_status.go:76] "Attempting to register node" node="crc" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.675290 4731 kubelet_node_status.go:115] "Node was previously registered" node="crc" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.675726 4731 kubelet_node_status.go:79] "Successfully registered node" node="crc" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.676367 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.676417 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.676582 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.676608 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.676622 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.676674 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2025-11-25 14:54:26.676659109 +0000 UTC m=+25.423793529 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.676728 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.676756 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.676771 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.676925 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:26.676892875 +0000 UTC m=+25.424027465 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.678070 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.678117 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.678128 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.678149 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.678162 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.703867 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.709618 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.709666 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.709675 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.709692 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.709704 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.723489 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.727377 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.727427 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.727438 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.727459 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.727469 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.740025 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.744326 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.744361 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.744369 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.744384 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.744393 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.754885 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.760282 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.760308 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.760316 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.760328 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.760337 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.770988 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:25 crc kubenswrapper[4731]: E1125 14:54:25.771218 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.773587 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.773672 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.773711 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.773741 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.773764 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.877743 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.877840 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.877861 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.877893 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.877918 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.980748 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.980869 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.980902 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.980936 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:25 crc kubenswrapper[4731]: I1125 14:54:25.980958 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:25Z","lastTransitionTime":"2025-11-25T14:54:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.015291 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.016349 4731 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.016460 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.022889 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.029536 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.033785 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.045713 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.058371 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.077440 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.084340 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.084381 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.084394 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.084417 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.084432 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.092687 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.092742 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.092755 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9dcead9d0602d1aba76f1cb8976f64de109be946f44cc046d63f27d3655a97b8"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.095196 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.095776 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.098659 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe" exitCode=255 Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.098764 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.098908 4731 scope.go:117] "RemoveContainer" containerID="c75ca277897bac2cc7773537dcf3f9d31cd14ad943e20b4ef62c84608422f906" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.099609 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.100496 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"87f10d91292241e92d639160393d1219ab798a8aeca3790f4e057d455c621c99"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.101890 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.101932 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"99dcb5cac4cd5c73d93afa210c8ec7a4a5a404a2cc2c97257dbd7e55c792ace2"} Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 
14:54:26.107420 4731 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.107661 4731 scope.go:117] "RemoveContainer" containerID="6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe" Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.107912 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.118259 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.132341 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.146112 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c75ca277897bac2cc7773537dcf3f9d31cd14ad943e20b4ef62c84608422f906\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:08Z\\\",\\\"message\\\":\\\"W1125 14:54:08.212420 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
14:54:08.212690 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764082448 cert, and key in /tmp/serving-cert-1766724377/serving-signer.crt, /tmp/serving-cert-1766724377/serving-signer.key\\\\nI1125 14:54:08.480431 1 observer_polling.go:159] Starting file observer\\\\nW1125 14:54:08.487604 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:54:08.487785 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:08.488556 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1766724377/tls.crt::/tmp/serving-cert-1766724377/tls.key\\\\\\\"\\\\nF1125 14:54:08.943897 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.158435 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.168420 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.185805 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.187296 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.187340 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.187350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.187368 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.187380 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.195164 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.205494 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.289383 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.289426 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.289438 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.289454 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.289465 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.392141 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.392189 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.392202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.392222 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.392234 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.494728 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.494785 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.494799 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.494835 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.494848 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.583555 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.583665 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.583699 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.583775 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.583892 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:54:28.583767499 +0000 UTC m=+27.330901929 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.583944 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:28.583930263 +0000 UTC m=+27.331064693 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.583951 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.584064 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:28.584035635 +0000 UTC m=+27.331170245 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.597704 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.597760 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.597773 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.597790 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.597802 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.684645 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.684702 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.684919 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.684938 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.684951 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.684946 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.684992 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.685005 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:28.684988422 +0000 UTC m=+27.432122852 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.685009 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.685082 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:28.685059564 +0000 UTC m=+27.432193994 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.700644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.700682 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.700696 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.700714 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.700727 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.802728 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.802760 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.802768 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.802784 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.802794 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.812271 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.812370 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.812407 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.812441 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.812563 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:26 crc kubenswrapper[4731]: E1125 14:54:26.812752 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.815246 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.815802 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.816577 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.817259 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.817803 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.818284 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.818841 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.819372 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.821455 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.821973 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.822428 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.823460 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.824040 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.824912 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.825449 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.826489 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.827071 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.827466 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.828683 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.829348 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.829785 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.830722 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.831138 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.832091 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.832500 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.833573 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.834194 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.835081 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" 
path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.835717 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.836247 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.837088 4731 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.837203 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.839353 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.842997 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.843404 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.845303 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.846391 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.846939 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.851759 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.852624 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.853097 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.855121 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" 
path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.857239 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.857847 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.858643 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.859173 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.861328 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.862224 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.863072 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.863514 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.864152 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.865313 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.865879 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.868334 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.905347 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.905408 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.905418 4731 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.905441 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.905457 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:26Z","lastTransitionTime":"2025-11-25T14:54:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.920638 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-tlrbx"] Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.920973 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-tlrbx" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.925675 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.925899 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.926132 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.940670 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c75ca277897bac2cc7773537dcf3f9d31cd14ad943e20b4ef62c84608422f906\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:08Z\\\",\\\"message\\\":\\\"W1125 14:54:08.212420 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1125 
14:54:08.212690 1 crypto.go:601] Generating new CA for check-endpoints-signer@1764082448 cert, and key in /tmp/serving-cert-1766724377/serving-signer.crt, /tmp/serving-cert-1766724377/serving-signer.key\\\\nI1125 14:54:08.480431 1 observer_polling.go:159] Starting file observer\\\\nW1125 14:54:08.487604 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1125 14:54:08.487785 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:08.488556 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1766724377/tls.crt::/tmp/serving-cert-1766724377/tls.key\\\\\\\"\\\\nF1125 14:54:08.943897 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:26Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.958317 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:26Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.975804 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:26Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.987770 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/032c7641-9dfb-4b82-8bfa-b82182de9c9c-hosts-file\") pod \"node-resolver-tlrbx\" (UID: \"032c7641-9dfb-4b82-8bfa-b82182de9c9c\") " pod="openshift-dns/node-resolver-tlrbx" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.987856 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf54q\" (UniqueName: \"kubernetes.io/projected/032c7641-9dfb-4b82-8bfa-b82182de9c9c-kube-api-access-rf54q\") pod \"node-resolver-tlrbx\" (UID: \"032c7641-9dfb-4b82-8bfa-b82182de9c9c\") " pod="openshift-dns/node-resolver-tlrbx" Nov 25 14:54:26 crc kubenswrapper[4731]: I1125 14:54:26.988739 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:26Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.002759 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:26Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.007962 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.008012 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.008024 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.008044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.008056 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.014493 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.034072 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.046852 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.089306 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/032c7641-9dfb-4b82-8bfa-b82182de9c9c-hosts-file\") pod \"node-resolver-tlrbx\" (UID: \"032c7641-9dfb-4b82-8bfa-b82182de9c9c\") " pod="openshift-dns/node-resolver-tlrbx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.089385 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf54q\" (UniqueName: \"kubernetes.io/projected/032c7641-9dfb-4b82-8bfa-b82182de9c9c-kube-api-access-rf54q\") pod \"node-resolver-tlrbx\" (UID: \"032c7641-9dfb-4b82-8bfa-b82182de9c9c\") " pod="openshift-dns/node-resolver-tlrbx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.089540 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/032c7641-9dfb-4b82-8bfa-b82182de9c9c-hosts-file\") pod \"node-resolver-tlrbx\" (UID: \"032c7641-9dfb-4b82-8bfa-b82182de9c9c\") " pod="openshift-dns/node-resolver-tlrbx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.107420 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.109758 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.109798 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.109811 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.109847 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.109860 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.110513 4731 scope.go:117] "RemoveContainer" containerID="6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe" Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.110674 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.110786 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf54q\" (UniqueName: \"kubernetes.io/projected/032c7641-9dfb-4b82-8bfa-b82182de9c9c-kube-api-access-rf54q\") pod \"node-resolver-tlrbx\" (UID: \"032c7641-9dfb-4b82-8bfa-b82182de9c9c\") " pod="openshift-dns/node-resolver-tlrbx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.125406 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.138108 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.152186 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.173211 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.197579 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.211874 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.212144 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.212227 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.212310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.212383 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.214939 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.232294 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.232643 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-tlrbx" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.246299 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod032c7641_9dfb_4b82_8bfa_b82182de9c9c.slice/crio-968abaf8c4785a46e53f641dcd71140c887d17c63c0d78f3f65b6a53493a7a32 WatchSource:0}: Error finding container 968abaf8c4785a46e53f641dcd71140c887d17c63c0d78f3f65b6a53493a7a32: Status 404 returned error can't find the container with id 968abaf8c4785a46e53f641dcd71140c887d17c63c0d78f3f65b6a53493a7a32 Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.274354 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.315698 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.315746 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.315759 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.315775 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.315787 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.318951 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8xx4w"] Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.319383 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.322400 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.322452 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.322960 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.324037 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.324787 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.325911 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-c2xb5"] Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.326555 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.327480 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hc8cx"] Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.328029 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-f6xlz"] Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.328295 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.328550 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.330673 4731 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.330725 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.331611 4731 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.331646 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.340078 4731 reflector.go:561] object-"openshift-machine-config-operator"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.340152 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.340227 4731 reflector.go:561] object-"openshift-machine-config-operator"/"proxy-tls": failed to list *v1.Secret: secrets "proxy-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.340239 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"proxy-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"proxy-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no 
relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.340436 4731 reflector.go:561] object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq": failed to list *v1.Secret: secrets "machine-config-daemon-dockercfg-r5tcq" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.340457 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-r5tcq\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-config-daemon-dockercfg-r5tcq\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.340545 4731 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.340562 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.340777 4731 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.340796 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.340875 4731 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.340887 4731 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' 
and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.340915 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.340926 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.341061 4731 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.341096 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.342995 4731 reflector.go:561] object-"openshift-machine-config-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.343074 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.351496 4731 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.351556 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot 
list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.351664 4731 reflector.go:561] object-"openshift-machine-config-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-config-operator": no relationship found between node 'crc' and this object Nov 25 14:54:27 crc kubenswrapper[4731]: E1125 14:54:27.351686 4731 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-config-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.352138 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.368372 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391348 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fpl2\" (UniqueName: \"kubernetes.io/projected/5c10b238-57f1-4168-9c7c-1cfb0ca40525-kube-api-access-2fpl2\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391408 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f469eb10-0aba-4dba-a237-899e0b8dabca-mcd-auth-proxy-config\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391430 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-systemd\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391453 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f469eb10-0aba-4dba-a237-899e0b8dabca-proxy-tls\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391475 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-socket-dir-parent\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391493 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-run-netns\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391510 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-os-release\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391529 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-script-lib\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391549 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-var-lib-cni-multus\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391569 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-hostroot\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391592 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-conf-dir\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 
14:54:27.391624 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-kubelet\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391641 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391662 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdcdt\" (UniqueName: \"kubernetes.io/projected/2fa0e665-3cc9-4bb9-9758-75bb8db39513-kube-api-access-xdcdt\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391679 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-os-release\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391698 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-run-k8s-cni-cncf-io\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391720 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-var-lib-cni-bin\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391738 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-cnibin\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391755 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-ovn-kubernetes\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391779 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl9sh\" (UniqueName: \"kubernetes.io/projected/062d0f18-096f-4f99-9faa-74dbd454bcd0-kube-api-access-zl9sh\") pod \"multus-8xx4w\" (UID: 
\"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391802 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-log-socket\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391841 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-var-lib-kubelet\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391862 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-etc-kubernetes\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391883 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-etc-openvswitch\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391900 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-netd\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391920 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5c10b238-57f1-4168-9c7c-1cfb0ca40525-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391938 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-system-cni-dir\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391963 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-cnibin\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.391983 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-node-log\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392002 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-bin\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392064 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-cni-dir\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392085 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-daemon-config\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392107 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392142 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/062d0f18-096f-4f99-9faa-74dbd454bcd0-cni-binary-copy\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392164 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-slash\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392188 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovn-node-metrics-cert\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392209 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5c10b238-57f1-4168-9c7c-1cfb0ca40525-cni-binary-copy\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392229 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f469eb10-0aba-4dba-a237-899e0b8dabca-rootfs\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392249 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-config\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392271 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztxkf\" (UniqueName: \"kubernetes.io/projected/f469eb10-0aba-4dba-a237-899e0b8dabca-kube-api-access-ztxkf\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392295 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-systemd-units\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392329 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-run-multus-certs\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392350 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-openvswitch\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392369 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-ovn\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392389 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-env-overrides\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392408 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-system-cni-dir\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 
14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392426 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-var-lib-openvswitch\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.392446 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-netns\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.420198 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.421983 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.422032 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.422043 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.422064 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.422076 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.435352 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.452198 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.466594 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.482763 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.484882 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493739 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-var-lib-cni-multus\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493776 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-hostroot\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493798 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-conf-dir\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493845 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-kubelet\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493866 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-script-lib\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493888 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493904 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdcdt\" (UniqueName: \"kubernetes.io/projected/2fa0e665-3cc9-4bb9-9758-75bb8db39513-kube-api-access-xdcdt\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493924 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-var-lib-cni-bin\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493944 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-var-lib-cni-multus\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493973 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-kubelet\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494049 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-cnibin\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494035 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-var-lib-cni-bin\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.493985 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-cnibin\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494100 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-conf-dir\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494292 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-ovn-kubernetes\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494393 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-ovn-kubernetes\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494431 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-os-release\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494490 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-os-release\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494501 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-run-k8s-cni-cncf-io\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494569 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl9sh\" (UniqueName: \"kubernetes.io/projected/062d0f18-096f-4f99-9faa-74dbd454bcd0-kube-api-access-zl9sh\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494593 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-run-k8s-cni-cncf-io\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494594 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-log-socket\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494626 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-var-lib-kubelet\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494647 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-etc-kubernetes\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " 
pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494671 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-etc-openvswitch\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494694 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-netd\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494725 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5c10b238-57f1-4168-9c7c-1cfb0ca40525-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494754 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-system-cni-dir\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494776 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-node-log\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494793 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-script-lib\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494797 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-bin\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494857 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-bin\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494878 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-cnibin\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494875 
4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-tuning-conf-dir\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494909 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-log-socket\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494931 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-cnibin\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494949 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-var-lib-kubelet\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494968 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-system-cni-dir\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494968 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-netd\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494987 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-node-log\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494931 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-cni-dir\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495013 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-etc-openvswitch\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495024 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-daemon-config\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.494991 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-etc-kubernetes\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495045 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495106 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-hostroot\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495124 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-cni-dir\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495130 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/062d0f18-096f-4f99-9faa-74dbd454bcd0-cni-binary-copy\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495176 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-slash\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495182 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495198 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovn-node-metrics-cert\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495242 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-slash\") pod \"ovnkube-node-hc8cx\" (UID: 
\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495364 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5c10b238-57f1-4168-9c7c-1cfb0ca40525-cni-binary-copy\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495393 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f469eb10-0aba-4dba-a237-899e0b8dabca-rootfs\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495415 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-config\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495456 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-run-multus-certs\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495481 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztxkf\" (UniqueName: \"kubernetes.io/projected/f469eb10-0aba-4dba-a237-899e0b8dabca-kube-api-access-ztxkf\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495504 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-systemd-units\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495529 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-env-overrides\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495552 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-openvswitch\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495575 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-ovn\") pod \"ovnkube-node-hc8cx\" (UID: 
\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495599 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-system-cni-dir\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495624 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-var-lib-openvswitch\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495651 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-netns\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495677 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fpl2\" (UniqueName: \"kubernetes.io/projected/5c10b238-57f1-4168-9c7c-1cfb0ca40525-kube-api-access-2fpl2\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495702 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f469eb10-0aba-4dba-a237-899e0b8dabca-mcd-auth-proxy-config\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495729 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-systemd\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495742 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/062d0f18-096f-4f99-9faa-74dbd454bcd0-cni-binary-copy\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495746 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-daemon-config\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495772 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-socket-dir-parent\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " 
pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495800 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-run-netns\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495855 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-os-release\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495884 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f469eb10-0aba-4dba-a237-899e0b8dabca-proxy-tls\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496004 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-systemd-units\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496039 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-ovn\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496005 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-openvswitch\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496086 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-systemd\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496112 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-var-lib-openvswitch\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496134 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-netns\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496202 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-system-cni-dir\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496262 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-multus-socket-dir-parent\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496274 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/5c10b238-57f1-4168-9c7c-1cfb0ca40525-cni-binary-copy\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496296 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-run-netns\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.495804 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/062d0f18-096f-4f99-9faa-74dbd454bcd0-host-run-multus-certs\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496311 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f469eb10-0aba-4dba-a237-899e0b8dabca-rootfs\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.496362 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/5c10b238-57f1-4168-9c7c-1cfb0ca40525-os-release\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.501618 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.503108 4731 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.515617 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.516469 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl9sh\" (UniqueName: \"kubernetes.io/projected/062d0f18-096f-4f99-9faa-74dbd454bcd0-kube-api-access-zl9sh\") pod \"multus-8xx4w\" (UID: \"062d0f18-096f-4f99-9faa-74dbd454bcd0\") " pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.516469 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fpl2\" (UniqueName: \"kubernetes.io/projected/5c10b238-57f1-4168-9c7c-1cfb0ca40525-kube-api-access-2fpl2\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.524485 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.524543 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.524556 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.524578 4731 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.524596 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.526030 4731 csr.go:261] certificate signing request csr-mzg8n is approved, waiting to be issued Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.534496 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.535251 4731 csr.go:257] certificate signing request csr-mzg8n is issued Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.553416 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 
2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.566655 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.583470 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.595993 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.611556 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.624322 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.627288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.627329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.627343 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.627362 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.627374 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.634447 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-8xx4w" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.641358 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: W1125 14:54:27.645062 4731 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod062d0f18_096f_4f99_9faa_74dbd454bcd0.slice/crio-f1f843a8bd99c27d17edfc8ce0b7b609d37013ba23f054b2b8303d08d3d1ebd9 WatchSource:0}: Error finding container f1f843a8bd99c27d17edfc8ce0b7b609d37013ba23f054b2b8303d08d3d1ebd9: Status 404 returned error can't find the container with id f1f843a8bd99c27d17edfc8ce0b7b609d37013ba23f054b2b8303d08d3d1ebd9 Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.664318 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.685255 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.704447 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.716429 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.729994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.730044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.730054 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.730071 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.730082 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.733584 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.832575 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.832611 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.832620 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.832636 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.832646 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.935545 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.935583 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.935592 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.935607 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.935616 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:27Z","lastTransitionTime":"2025-11-25T14:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.970543 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.974662 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.981425 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Nov 25 14:54:27 crc kubenswrapper[4731]: I1125 14:54:27.985763 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.002135 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.018445 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.035876 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.037949 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.037985 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.038001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.038022 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.038035 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.057430 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.100922 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.114737 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.115950 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xx4w" event={"ID":"062d0f18-096f-4f99-9faa-74dbd454bcd0","Type":"ContainerStarted","Data":"b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.116020 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xx4w" event={"ID":"062d0f18-096f-4f99-9faa-74dbd454bcd0","Type":"ContainerStarted","Data":"f1f843a8bd99c27d17edfc8ce0b7b609d37013ba23f054b2b8303d08d3d1ebd9"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.117171 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tlrbx" event={"ID":"032c7641-9dfb-4b82-8bfa-b82182de9c9c","Type":"ContainerStarted","Data":"7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.117251 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-tlrbx" event={"ID":"032c7641-9dfb-4b82-8bfa-b82182de9c9c","Type":"ContainerStarted","Data":"968abaf8c4785a46e53f641dcd71140c887d17c63c0d78f3f65b6a53493a7a32"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.140534 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.140586 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.140599 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.140621 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.140634 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.163752 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.192661 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.208236 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is 
after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.243342 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.243386 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.243397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.243418 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.243431 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.251386 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.271502 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.283001 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.297663 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.309450 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.327888 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\
\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.346371 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.346410 4731 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.346449 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.346459 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.346476 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.346489 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.357928 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.361984 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 
crc kubenswrapper[4731]: I1125 14:54:28.379575 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.389881 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.394726 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.397456 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-env-overrides\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.409229 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.422766 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.434782 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.435716 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.436461 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/5c10b238-57f1-4168-9c7c-1cfb0ca40525-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-c2xb5\" (UID: \"5c10b238-57f1-4168-9c7c-1cfb0ca40525\") " pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.440640 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.448078 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f469eb10-0aba-4dba-a237-899e0b8dabca-mcd-auth-proxy-config\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.449354 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.449402 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.449412 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.449433 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.449447 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.449992 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: 
I1125 14:54:28.463340 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.485425 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:28Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.487213 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.488106 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Nov 25 14:54:28 crc 
kubenswrapper[4731]: E1125 14:54:28.495636 4731 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.495761 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovn-node-metrics-cert podName:2fa0e665-3cc9-4bb9-9758-75bb8db39513 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:28.995735374 +0000 UTC m=+27.742869804 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovn-node-metrics-cert") pod "ovnkube-node-hc8cx" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513") : failed to sync secret cache: timed out waiting for the condition Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.495955 4731 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.496055 4731 secret.go:188] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.496072 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdcdt\" (UniqueName: \"kubernetes.io/projected/2fa0e665-3cc9-4bb9-9758-75bb8db39513-kube-api-access-xdcdt\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.496089 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-config podName:2fa0e665-3cc9-4bb9-9758-75bb8db39513 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:28.996059382 +0000 UTC m=+27.743193872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-config") pod "ovnkube-node-hc8cx" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513") : failed to sync configmap cache: timed out waiting for the condition Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.496116 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f469eb10-0aba-4dba-a237-899e0b8dabca-proxy-tls podName:f469eb10-0aba-4dba-a237-899e0b8dabca nodeName:}" failed. No retries permitted until 2025-11-25 14:54:28.996108044 +0000 UTC m=+27.743242584 (durationBeforeRetry 500ms). 
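Annotation (not journal output): every "Failed to update status for pod" entry above fails the same way. The API server's call to the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 is rejected because the webhook's serving certificate expired at 2025-08-24T17:21:41Z, months before the node's current clock time. A minimal Go sketch for confirming that from the node follows; the address comes from the log, everything else (reachability of the endpoint, no required client certificate) is an assumption.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	// Verification is skipped on purpose: the goal is to read the expired
	// certificate's validity window, not to trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		// Expect notAfter to print as 2025-08-24T17:21:41Z per the log.
		fmt.Printf("subject=%s notBefore=%s notAfter=%s\n",
			cert.Subject, cert.NotBefore, cert.NotAfter)
	}
}
```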
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f469eb10-0aba-4dba-a237-899e0b8dabca-proxy-tls") pod "machine-config-daemon-f6xlz" (UID: "f469eb10-0aba-4dba-a237-899e0b8dabca") : failed to sync secret cache: timed out waiting for the condition Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.536524 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-11-25 14:49:27 +0000 UTC, rotation deadline is 2026-08-22 05:01:12.424098887 +0000 UTC Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.536628 4731 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6470h6m43.887473871s for next certificate rotation Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.548894 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.552063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.552107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.552119 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.552139 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.552151 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.574449 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.578715 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" Nov 25 14:54:28 crc kubenswrapper[4731]: W1125 14:54:28.591466 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c10b238_57f1_4168_9c7c_1cfb0ca40525.slice/crio-3ebba19b58feb25f4b2cf3a61d70794461ca095ffd6894962ad9ac729f7768b3 WatchSource:0}: Error finding container 3ebba19b58feb25f4b2cf3a61d70794461ca095ffd6894962ad9ac729f7768b3: Status 404 returned error can't find the container with id 3ebba19b58feb25f4b2cf3a61d70794461ca095ffd6894962ad9ac729f7768b3 Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.612003 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.612220 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.612271 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.612323 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:54:32.612285604 +0000 UTC m=+31.359420034 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.612356 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.612356 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.612427 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
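Annotation (not journal output): the certificate_manager lines above are the one healthy certificate story in this window. The kubelet's own client certificate is valid until 2026-11-25 14:49:27 UTC, and rotation is scheduled for 2026-08-22 05:01:12 UTC; client-go jitters that deadline to a random point roughly 70 to 90% of the way through the certificate's lifetime, which Aug 22 is for a one-year certificate. The logged wait of 6470h6m43.887473871s is simply the gap between the two instants, as this sketch checks (both timestamps copied from the log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	// Log emission time and the logged rotation deadline, verbatim.
	logged, _ := time.Parse(layout, "2025-11-25 14:54:28.536628 +0000 UTC")
	deadline, _ := time.Parse(layout, "2026-08-22 05:01:12.424098887 +0000 UTC")
	// Prints 6470h6m43.887470887s: the logged wait to within a few
	// microseconds (the kubelet sampled its clock just before emitting).
	fmt.Println(deadline.Sub(logged))
}
```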
No retries permitted until 2025-11-25 14:54:32.612407167 +0000 UTC m=+31.359541807 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.612471 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:32.612463598 +0000 UTC m=+31.359598018 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.629178 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.655385 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.655437 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.655450 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.655472 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.655491 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.656224 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.666785 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.714018 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.714074 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.714298 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.714328 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.714344 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.714363 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.714374 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.714380 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.714454 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:32.71443358 +0000 UTC m=+31.461568010 (durationBeforeRetry 4s). 
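Annotation (not journal output): the durationBeforeRetry values show the volume reconciler's exponential backoff at work. The proxy-tls, ovnkube-config, and ovn-node-metrics-cert mounts above failed a first attempt and retry after 500ms, while the networking-console-plugin and network-diagnostics volumes are already at 4s, i.e. several doublings in. A sketch of that schedule, assuming a plain doubling policy; the 2m2s cap is an assumption based on the kubelet's exponentialbackoff helper and is not visible in this log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond            // first durationBeforeRetry seen in the log
	maxDelay := 2*time.Minute + 2*time.Second  // assumed cap, not shown in the log
	for attempt := 1; attempt <= 9; attempt++ {
		// attempt 1 -> 500ms, attempt 4 -> 4s, matching the entries above
		fmt.Printf("attempt %d: durationBeforeRetry %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```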
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.714478 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:32.714467631 +0000 UTC m=+31.461602061 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.724119 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.757672 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.757721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.757735 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.757758 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.757771 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.810689 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.810783 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.810801 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.810859 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.810972 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:28 crc kubenswrapper[4731]: E1125 14:54:28.811167 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.817602 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.827740 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztxkf\" (UniqueName: \"kubernetes.io/projected/f469eb10-0aba-4dba-a237-899e0b8dabca-kube-api-access-ztxkf\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.859995 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.860060 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.860074 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.860094 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.860523 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.963141 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.963196 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.963208 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.963228 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:28 crc kubenswrapper[4731]: I1125 14:54:28.963242 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:28Z","lastTransitionTime":"2025-11-25T14:54:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.017041 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovn-node-metrics-cert\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.017116 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-config\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.017194 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f469eb10-0aba-4dba-a237-899e0b8dabca-proxy-tls\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.017949 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-config\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.020543 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f469eb10-0aba-4dba-a237-899e0b8dabca-proxy-tls\") pod \"machine-config-daemon-f6xlz\" (UID: \"f469eb10-0aba-4dba-a237-899e0b8dabca\") " pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.020562 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovn-node-metrics-cert\") pod \"ovnkube-node-hc8cx\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 
25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.066092 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.066136 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.066145 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.066161 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.066171 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.129877 4731 generic.go:334] "Generic (PLEG): container finished" podID="5c10b238-57f1-4168-9c7c-1cfb0ca40525" containerID="502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8" exitCode=0 Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.129986 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" event={"ID":"5c10b238-57f1-4168-9c7c-1cfb0ca40525","Type":"ContainerDied","Data":"502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.130070 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" event={"ID":"5c10b238-57f1-4168-9c7c-1cfb0ca40525","Type":"ContainerStarted","Data":"3ebba19b58feb25f4b2cf3a61d70794461ca095ffd6894962ad9ac729f7768b3"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.150732 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.166490 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.169733 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.169776 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.169791 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.169842 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.169857 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.173310 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.182452 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc 
kubenswrapper[4731]: W1125 14:54:29.188393 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf469eb10_0aba_4dba_a237_899e0b8dabca.slice/crio-0429aefb4c773124be07681b8d3f4f99d48394fb61bae0fdd58877aa29220c4c WatchSource:0}: Error finding container 0429aefb4c773124be07681b8d3f4f99d48394fb61bae0fdd58877aa29220c4c: Status 404 returned error can't find the container with id 0429aefb4c773124be07681b8d3f4f99d48394fb61bae0fdd58877aa29220c4c Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.197785 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.199971 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.211236 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.223728 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: W1125 14:54:29.223867 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fa0e665_3cc9_4bb9_9758_75bb8db39513.slice/crio-8c041fd560217950e94869b93c1f216f39d6ebe796d4647e8c5649613919fc7d WatchSource:0}: Error finding container 8c041fd560217950e94869b93c1f216f39d6ebe796d4647e8c5649613919fc7d: Status 404 returned error can't find the container with id 8c041fd560217950e94869b93c1f216f39d6ebe796d4647e8c5649613919fc7d Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.239851 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.251343 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.266190 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.271854 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.271889 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.271898 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.271918 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.271931 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.283924 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.290608 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7wt6q"] Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.291345 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.292967 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.294301 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.294494 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.296445 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.300192 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\
\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.316795 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.340303 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.356107 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.375429 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.375695 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.375772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.375862 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.375939 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.377563 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.392643 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc 
kubenswrapper[4731]: I1125 14:54:29.409346 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.422222 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/700e6303-2fdb-4127-9fd3-b9dbaa412587-serviceca\") pod \"node-ca-7wt6q\" (UID: \"700e6303-2fdb-4127-9fd3-b9dbaa412587\") " pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.422301 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/700e6303-2fdb-4127-9fd3-b9dbaa412587-host\") pod \"node-ca-7wt6q\" (UID: \"700e6303-2fdb-4127-9fd3-b9dbaa412587\") " pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.422343 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj6wf\" (UniqueName: \"kubernetes.io/projected/700e6303-2fdb-4127-9fd3-b9dbaa412587-kube-api-access-nj6wf\") pod \"node-ca-7wt6q\" (UID: \"700e6303-2fdb-4127-9fd3-b9dbaa412587\") " pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc 
kubenswrapper[4731]: I1125 14:54:29.423780 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.449785 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.474860 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.478995 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.479039 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.479049 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.479065 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.479077 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.499673 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.520029 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.523067 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj6wf\" (UniqueName: \"kubernetes.io/projected/700e6303-2fdb-4127-9fd3-b9dbaa412587-kube-api-access-nj6wf\") pod \"node-ca-7wt6q\" (UID: \"700e6303-2fdb-4127-9fd3-b9dbaa412587\") " pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.523124 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/700e6303-2fdb-4127-9fd3-b9dbaa412587-serviceca\") pod \"node-ca-7wt6q\" (UID: \"700e6303-2fdb-4127-9fd3-b9dbaa412587\") " pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.523159 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/700e6303-2fdb-4127-9fd3-b9dbaa412587-host\") pod \"node-ca-7wt6q\" (UID: \"700e6303-2fdb-4127-9fd3-b9dbaa412587\") " pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.523227 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/700e6303-2fdb-4127-9fd3-b9dbaa412587-host\") pod \"node-ca-7wt6q\" (UID: \"700e6303-2fdb-4127-9fd3-b9dbaa412587\") " pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.524234 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/700e6303-2fdb-4127-9fd3-b9dbaa412587-serviceca\") pod \"node-ca-7wt6q\" (UID: \"700e6303-2fdb-4127-9fd3-b9dbaa412587\") " pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.535944 4731 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.545784 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj6wf\" (UniqueName: \"kubernetes.io/projected/700e6303-2fdb-4127-9fd3-b9dbaa412587-kube-api-access-nj6wf\") pod \"node-ca-7wt6q\" (UID: 
\"700e6303-2fdb-4127-9fd3-b9dbaa412587\") " pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.549521 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.562374 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.573125 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.583124 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.583186 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.583202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.583226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.583241 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.588241 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:29Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.605430 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7wt6q" Nov 25 14:54:29 crc kubenswrapper[4731]: W1125 14:54:29.616606 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod700e6303_2fdb_4127_9fd3_b9dbaa412587.slice/crio-4183c177226ffa5af187d02c0736637d403316ec65421fad9faa99ac1f41e86d WatchSource:0}: Error finding container 4183c177226ffa5af187d02c0736637d403316ec65421fad9faa99ac1f41e86d: Status 404 returned error can't find the container with id 4183c177226ffa5af187d02c0736637d403316ec65421fad9faa99ac1f41e86d Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.686679 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.686745 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.686762 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.686786 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.686800 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.789845 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.789889 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.789899 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.789916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.789930 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.892610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.892653 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.892664 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.892682 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.892692 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.995244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.995290 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.995301 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.995323 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:29 crc kubenswrapper[4731]: I1125 14:54:29.995337 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:29Z","lastTransitionTime":"2025-11-25T14:54:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.098515 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.098577 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.098591 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.098615 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.098633 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.134063 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7wt6q" event={"ID":"700e6303-2fdb-4127-9fd3-b9dbaa412587","Type":"ContainerStarted","Data":"4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.134119 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7wt6q" event={"ID":"700e6303-2fdb-4127-9fd3-b9dbaa412587","Type":"ContainerStarted","Data":"4183c177226ffa5af187d02c0736637d403316ec65421fad9faa99ac1f41e86d"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.135455 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.135502 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.135514 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"0429aefb4c773124be07681b8d3f4f99d48394fb61bae0fdd58877aa29220c4c"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.136888 4731 generic.go:334] "Generic (PLEG): container finished" podID="5c10b238-57f1-4168-9c7c-1cfb0ca40525" containerID="b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf" exitCode=0 Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.136948 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" event={"ID":"5c10b238-57f1-4168-9c7c-1cfb0ca40525","Type":"ContainerDied","Data":"b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.138056 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb" exitCode=0 Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.138083 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.138101 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"8c041fd560217950e94869b93c1f216f39d6ebe796d4647e8c5649613919fc7d"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.153437 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.170842 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.185530 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.198012 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.200674 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.200718 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.200728 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.200747 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.200772 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.210632 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.220471 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" 
for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.242060 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.255652 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.269471 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.285277 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc 
kubenswrapper[4731]: I1125 14:54:30.303501 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.303567 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.303579 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.303600 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.303580 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.303612 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.318402 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.332876 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.346223 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.360940 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.383641 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z 
is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.396249 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.406351 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.406395 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.406403 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.406423 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.406434 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.406860 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.421200 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with 
incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\
\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\
\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.434428 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.445152 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.454901 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.468136 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.479428 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.492031 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.503730 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.508059 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.508100 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.508110 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.508127 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.508136 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.518284 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.532696 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.610987 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.611032 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.611044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.611063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.611073 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.714001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.714040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.714048 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.714062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.714071 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.810975 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.811016 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:30 crc kubenswrapper[4731]: E1125 14:54:30.811121 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:30 crc kubenswrapper[4731]: E1125 14:54:30.811296 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.810971 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:30 crc kubenswrapper[4731]: E1125 14:54:30.811438 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.816101 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.816153 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.816164 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.816179 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.816189 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.919120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.919190 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.919201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.919225 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:30 crc kubenswrapper[4731]: I1125 14:54:30.919260 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:30Z","lastTransitionTime":"2025-11-25T14:54:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.023733 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.024189 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.024198 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.024218 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.024227 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.127312 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.127523 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.127622 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.127735 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.128127 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.146766 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.147005 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.147728 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.147780 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.147794 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.150850 4731 generic.go:334] "Generic (PLEG): container finished" podID="5c10b238-57f1-4168-9c7c-1cfb0ca40525" containerID="a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411" exitCode=0 Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.150896 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" event={"ID":"5c10b238-57f1-4168-9c7c-1cfb0ca40525","Type":"ContainerDied","Data":"a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.173730 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.193093 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.251457 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.258169 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.258198 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.258209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.258227 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.258238 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.273285 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.287360 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.304266 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.318598 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.334405 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.346651 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.361141 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.361973 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.362016 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.362032 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.362053 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.362066 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.382152 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4
262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.393968 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.403956 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.422236 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.464287 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.464375 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.464393 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.464415 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.464429 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.567465 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.567543 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.567563 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.567593 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.567614 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.670710 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.670758 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.670768 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.670784 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.670795 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.774064 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.774141 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.774159 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.774188 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.774209 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.877924 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.877981 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.877992 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.878013 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.878026 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.881338 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.882088 4731 scope.go:117] "RemoveContainer" containerID="6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe" Nov 25 14:54:31 crc kubenswrapper[4731]: E1125 14:54:31.882273 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.980292 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.980336 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.980347 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.980366 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:31 crc kubenswrapper[4731]: I1125 14:54:31.980376 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:31Z","lastTransitionTime":"2025-11-25T14:54:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.082544 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.082595 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.082607 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.082627 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.082642 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:32Z","lastTransitionTime":"2025-11-25T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.158169 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.161269 4731 generic.go:334] "Generic (PLEG): container finished" podID="5c10b238-57f1-4168-9c7c-1cfb0ca40525" containerID="4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7" exitCode=0 Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.161324 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" event={"ID":"5c10b238-57f1-4168-9c7c-1cfb0ca40525","Type":"ContainerDied","Data":"4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.185778 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.185832 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.185843 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.185858 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.185868 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:32Z","lastTransitionTime":"2025-11-25T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.187532 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.200167 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.217631 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.232740 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.244851 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.256977 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.268875 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.283274 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.287987 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.288018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.288026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.288041 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.288050 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:32Z","lastTransitionTime":"2025-11-25T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.294510 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.310887 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.324595 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.337312 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.352681 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.376324 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z 
is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.390647 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.390687 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.390701 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.390720 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.390733 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:32Z","lastTransitionTime":"2025-11-25T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.493017 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.493053 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.493062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.493077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.493088 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:32Z","lastTransitionTime":"2025-11-25T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.517437 4731 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.594951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.595002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.595014 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.595032 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.595044 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:32Z","lastTransitionTime":"2025-11-25T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.664443 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.664569 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.664609 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.664636 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:54:40.664610083 +0000 UTC m=+39.411744523 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.664717 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.664771 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:40.664758627 +0000 UTC m=+39.411893057 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.664783 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.664900 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:40.66487512 +0000 UTC m=+39.412009720 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.702376 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.702428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.702442 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.702461 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.702472 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:32Z","lastTransitionTime":"2025-11-25T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.765349 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.765408 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.765598 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.765641 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.765657 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.765602 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.765731 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.765744 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.765717 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:40.765699633 +0000 UTC m=+39.512834063 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.765806 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:40.765791965 +0000 UTC m=+39.512926395 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.804838 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.804872 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.804883 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.804899 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.804912 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:32Z","lastTransitionTime":"2025-11-25T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.810689 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.810695 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.810798 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.810710 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.811089 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:32 crc kubenswrapper[4731]: E1125 14:54:32.811116 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.825178 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.840069 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.852217 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.864794 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.878198 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.890827 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.901634 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.906734 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.906770 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.906783 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.906800 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.906832 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:32Z","lastTransitionTime":"2025-11-25T14:54:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.921161 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.950958 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.979067 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:32 crc kubenswrapper[4731]: I1125 14:54:32.998497 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.009579 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.009614 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.009626 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.009641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.009653 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:33Z","lastTransitionTime":"2025-11-25T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.013077 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.025098 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.042245 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"w
aiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.112010 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.112103 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.112117 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.112133 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.112143 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:33Z","lastTransitionTime":"2025-11-25T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.169624 4731 generic.go:334] "Generic (PLEG): container finished" podID="5c10b238-57f1-4168-9c7c-1cfb0ca40525" containerID="027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4" exitCode=0 Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.169697 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" event={"ID":"5c10b238-57f1-4168-9c7c-1cfb0ca40525","Type":"ContainerDied","Data":"027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4"} Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.184352 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.198059 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.213213 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\
",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.223372 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.223464 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.223478 
4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.223498 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.223511 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:33Z","lastTransitionTime":"2025-11-25T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.235053 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.251065 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.263203 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.276066 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.291660 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.305055 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.318116 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.326180 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.326226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.326241 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.326264 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.326279 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:33Z","lastTransitionTime":"2025-11-25T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.332660 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.347548 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.363271 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.382252 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:33Z 
is after 2025-08-24T17:21:41Z" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.428850 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.428889 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.428901 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.428917 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.428928 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:33Z","lastTransitionTime":"2025-11-25T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.531381 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.531421 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.531434 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.531454 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.531466 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:33Z","lastTransitionTime":"2025-11-25T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.633902 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.633937 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.633948 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.633963 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.633974 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:33Z","lastTransitionTime":"2025-11-25T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.737312 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.737643 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.737655 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.737676 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.737691 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:33Z","lastTransitionTime":"2025-11-25T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.840072 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.840126 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.840137 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.840157 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.840169 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:33Z","lastTransitionTime":"2025-11-25T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.942812 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.942895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.942906 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.942923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:33 crc kubenswrapper[4731]: I1125 14:54:33.942934 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:33Z","lastTransitionTime":"2025-11-25T14:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.046413 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.046470 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.046484 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.046506 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.046521 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:34Z","lastTransitionTime":"2025-11-25T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.149599 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.149655 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.149667 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.149688 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.149710 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:34Z","lastTransitionTime":"2025-11-25T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.177937 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.182890 4731 generic.go:334] "Generic (PLEG): container finished" podID="5c10b238-57f1-4168-9c7c-1cfb0ca40525" containerID="3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b" exitCode=0 Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.182954 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" event={"ID":"5c10b238-57f1-4168-9c7c-1cfb0ca40525","Type":"ContainerDied","Data":"3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.206121 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.230277 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.247226 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.252042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.252077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.252091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.252110 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.252127 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:34Z","lastTransitionTime":"2025-11-25T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.261855 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.278044 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.294711 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.305795 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.318274 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.336279 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.354990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.355045 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.355057 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.355078 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.355091 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:34Z","lastTransitionTime":"2025-11-25T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.362040 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.377361 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.390879 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.406689 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.419065 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:34Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.458311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.458345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.458355 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.458373 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.458383 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:34Z","lastTransitionTime":"2025-11-25T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.562384 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.562447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.562471 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.562504 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.562531 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:34Z","lastTransitionTime":"2025-11-25T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.667067 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.667127 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.667137 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.667159 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.667171 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:34Z","lastTransitionTime":"2025-11-25T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.769995 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.770071 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.770093 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.770128 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.770151 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:34Z","lastTransitionTime":"2025-11-25T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.811134 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:34 crc kubenswrapper[4731]: E1125 14:54:34.811327 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.811428 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:34 crc kubenswrapper[4731]: E1125 14:54:34.811657 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.811732 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:34 crc kubenswrapper[4731]: E1125 14:54:34.811918 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.873021 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.873078 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.873089 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.873107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.873121 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:34Z","lastTransitionTime":"2025-11-25T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.975642 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.975693 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.975705 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.975723 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:34 crc kubenswrapper[4731]: I1125 14:54:34.975737 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:34Z","lastTransitionTime":"2025-11-25T14:54:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.078336 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.078400 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.078419 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.078449 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.078464 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.180760 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.180800 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.180838 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.180854 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.180864 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.189554 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" event={"ID":"5c10b238-57f1-4168-9c7c-1cfb0ca40525","Type":"ContainerStarted","Data":"9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad"} Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.208505 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.219265 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.234342 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa9
18ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.246966 4731 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.257659 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.269633 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.284907 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.284981 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.284997 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.285022 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.285034 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.290462 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.305079 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.321772 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.334749 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.349071 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.362578 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.374560 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.388372 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.388411 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.388421 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.388437 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.388450 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.398972 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4
262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.491727 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.491804 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.491863 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.491895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.491916 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.595195 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.595272 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.595292 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.595320 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.595341 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.698173 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.698273 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.698308 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.698346 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.698371 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.801530 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.801849 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.801934 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.802051 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.802143 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.835047 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.835306 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.835446 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.835588 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.835720 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: E1125 14:54:35.849602 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.855338 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.855393 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.855402 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.855420 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.855431 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: E1125 14:54:35.868662 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.872374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.872416 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.872432 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.872453 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.872467 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: E1125 14:54:35.886178 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.890496 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.890540 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.890551 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.890569 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.890581 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: E1125 14:54:35.906370 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the previous status patch attempt, elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.911395 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.911461 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.911471 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.911489 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.911500 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:35 crc kubenswrapper[4731]: E1125 14:54:35.926301 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list identical to the previous status patch attempts, elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:35Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:35 crc kubenswrapper[4731]: E1125 14:54:35.926474 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.928879 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.929025 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.929107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.929184 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:35 crc kubenswrapper[4731]: I1125 14:54:35.929349 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:35Z","lastTransitionTime":"2025-11-25T14:54:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.038574 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.038630 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.038644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.038669 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.038713 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:36Z","lastTransitionTime":"2025-11-25T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.141846 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.141880 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.141889 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.141905 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.141917 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:36Z","lastTransitionTime":"2025-11-25T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.196374 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.196881 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.213101 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.230759 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.243582 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.244558 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.244591 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.244603 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.244622 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.244632 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:36Z","lastTransitionTime":"2025-11-25T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.249189 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.259499 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.273090 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.288489 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.302946 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.322762 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.325860 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d2253
4f461450487245e3bdaf2560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.347653 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.347710 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.347721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.347742 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.347754 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:36Z","lastTransitionTime":"2025-11-25T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.357779 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.374002 4731 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.387699 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.405280 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.418028 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.429208 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.444207 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.450289 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.450355 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.450369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.450386 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.450397 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:36Z","lastTransitionTime":"2025-11-25T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.459522 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.472931 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.487683 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.502639 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.514721 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.526097 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.535742 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.549137 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.553402 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.553444 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.553455 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.553473 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.553485 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:36Z","lastTransitionTime":"2025-11-25T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.562768 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05
Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.586227 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d2253
4f461450487245e3bdaf2560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.606151 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.621942 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.634795 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:36Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.656363 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.656409 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.656425 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.656447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.656460 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:36Z","lastTransitionTime":"2025-11-25T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.758660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.758705 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.758720 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.758737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.758751 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:36Z","lastTransitionTime":"2025-11-25T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.811563 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.811612 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:36 crc kubenswrapper[4731]: E1125 14:54:36.811731 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.811778 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:36 crc kubenswrapper[4731]: E1125 14:54:36.811866 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:36 crc kubenswrapper[4731]: E1125 14:54:36.811930 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.861575 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.861637 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.861647 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.861666 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.861677 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:36Z","lastTransitionTime":"2025-11-25T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.964380 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.964659 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.964773 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.964865 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:36 crc kubenswrapper[4731]: I1125 14:54:36.964933 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:36Z","lastTransitionTime":"2025-11-25T14:54:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.067904 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.068210 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.068287 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.068390 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.068458 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:37Z","lastTransitionTime":"2025-11-25T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.172461 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.173658 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.173695 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.173713 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.173726 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:37Z","lastTransitionTime":"2025-11-25T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.200000 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.225713 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.240294 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.253093 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.264855 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.276508 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.276564 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.276574 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.276593 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.276603 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:37Z","lastTransitionTime":"2025-11-25T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.278498 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.292142 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.305495 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.319541 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.337538 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.352025 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.366984 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.378860 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.378917 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.378930 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.378951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.378968 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:37Z","lastTransitionTime":"2025-11-25T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.390935 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.409459 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.423828 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.440724 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.482206 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.482268 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:37 crc 
kubenswrapper[4731]: I1125 14:54:37.482285 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.482312 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.482334 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:37Z","lastTransitionTime":"2025-11-25T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.585066 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.585110 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.585122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.585137 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.585147 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:37Z","lastTransitionTime":"2025-11-25T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.696122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.696165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.696177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.696195 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.696207 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:37Z","lastTransitionTime":"2025-11-25T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.799322 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.799384 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.799397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.799418 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.799435 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:37Z","lastTransitionTime":"2025-11-25T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.902562 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.902610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.902624 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.902641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:37 crc kubenswrapper[4731]: I1125 14:54:37.902651 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:37Z","lastTransitionTime":"2025-11-25T14:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.006384 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.006526 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.006551 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.006584 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.006606 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:38Z","lastTransitionTime":"2025-11-25T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.109759 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.109832 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.109845 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.109868 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.109886 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:38Z","lastTransitionTime":"2025-11-25T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.212423 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.212498 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.212511 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.212535 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.212549 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:38Z","lastTransitionTime":"2025-11-25T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.315596 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.315649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.315659 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.315677 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.315687 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:38Z","lastTransitionTime":"2025-11-25T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.418769 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.418871 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.418886 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.418913 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.418927 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:38Z","lastTransitionTime":"2025-11-25T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.521943 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.522019 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.522043 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.522079 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.522101 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:38Z","lastTransitionTime":"2025-11-25T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.624856 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.624923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.624936 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.624962 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.624990 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:38Z","lastTransitionTime":"2025-11-25T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.727565 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.727604 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.727614 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.727630 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.727640 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:38Z","lastTransitionTime":"2025-11-25T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.811138 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.811138 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:38 crc kubenswrapper[4731]: E1125 14:54:38.811760 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.811173 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:38 crc kubenswrapper[4731]: E1125 14:54:38.811902 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:38 crc kubenswrapper[4731]: E1125 14:54:38.812238 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.831434 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.831495 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.831515 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.831539 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.831554 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:38Z","lastTransitionTime":"2025-11-25T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.934349 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.934392 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.934402 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.934416 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:38 crc kubenswrapper[4731]: I1125 14:54:38.934425 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:38Z","lastTransitionTime":"2025-11-25T14:54:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.037172 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.037262 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.037272 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.037290 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.037301 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:39Z","lastTransitionTime":"2025-11-25T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.139754 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.139798 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.139806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.139842 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.139855 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:39Z","lastTransitionTime":"2025-11-25T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.242135 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.242193 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.242203 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.242221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.242229 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:39Z","lastTransitionTime":"2025-11-25T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.345108 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.345165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.345178 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.345197 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.345208 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:39Z","lastTransitionTime":"2025-11-25T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.447405 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.447440 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.447448 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.447462 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.447471 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:39Z","lastTransitionTime":"2025-11-25T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.550462 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.550516 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.550528 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.550548 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.550560 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:39Z","lastTransitionTime":"2025-11-25T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.645986 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb"] Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.646698 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.650561 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.650561 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.652682 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.652751 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.652761 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.652784 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.652798 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:39Z","lastTransitionTime":"2025-11-25T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.664553 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.677845 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.693785 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa9
18ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.709349 4731 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.724028 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.738082 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.756345 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner 
reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.756938 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.756969 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.756980 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.756996 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.757009 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:39Z","lastTransitionTime":"2025-11-25T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.772290 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.785095 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.801142 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.818318 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.832599 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.833956 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc2sw\" (UniqueName: \"kubernetes.io/projected/f7790405-0cce-4841-8f4f-328933f7be5e-kube-api-access-gc2sw\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.834001 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7790405-0cce-4841-8f4f-328933f7be5e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.834258 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7790405-0cce-4841-8f4f-328933f7be5e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.834335 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7790405-0cce-4841-8f4f-328933f7be5e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.847421 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.860231 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.860286 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.860296 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.860314 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.860327 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:39Z","lastTransitionTime":"2025-11-25T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.864009 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.886918 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d2253
4f461450487245e3bdaf2560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:39Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.935611 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc2sw\" (UniqueName: \"kubernetes.io/projected/f7790405-0cce-4841-8f4f-328933f7be5e-kube-api-access-gc2sw\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.935673 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7790405-0cce-4841-8f4f-328933f7be5e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.935745 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7790405-0cce-4841-8f4f-328933f7be5e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.935773 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7790405-0cce-4841-8f4f-328933f7be5e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.936635 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f7790405-0cce-4841-8f4f-328933f7be5e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.936853 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f7790405-0cce-4841-8f4f-328933f7be5e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.940990 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f7790405-0cce-4841-8f4f-328933f7be5e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.955503 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc2sw\" (UniqueName: \"kubernetes.io/projected/f7790405-0cce-4841-8f4f-328933f7be5e-kube-api-access-gc2sw\") pod \"ovnkube-control-plane-749d76644c-672jb\" (UID: \"f7790405-0cce-4841-8f4f-328933f7be5e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.963162 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.963912 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.963969 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.964003 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.964018 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:39Z","lastTransitionTime":"2025-11-25T14:54:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:39 crc kubenswrapper[4731]: I1125 14:54:39.972866 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.067083 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.067138 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.067153 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.067173 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.067186 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:40Z","lastTransitionTime":"2025-11-25T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.169114 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.169148 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.169157 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.169173 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.169184 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:40Z","lastTransitionTime":"2025-11-25T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.210186 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" event={"ID":"f7790405-0cce-4841-8f4f-328933f7be5e","Type":"ContainerStarted","Data":"8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.210245 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" event={"ID":"f7790405-0cce-4841-8f4f-328933f7be5e","Type":"ContainerStarted","Data":"60881f12d413ebd15369bf05e8cf73fdc373c2f4d3526e5258ee426c108038e7"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.211780 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/0.log" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.214078 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560" exitCode=1 Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.214128 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.215323 4731 scope.go:117] "RemoveContainer" containerID="a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.227861 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.238484 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.254721 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa9
18ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.269644 4731 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.272140 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.272177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.272189 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.272208 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.272222 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:40Z","lastTransitionTime":"2025-11-25T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.280213 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.295582 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.311773 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.332247 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.349605 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.367358 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.376176 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.376215 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.376225 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.376240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.376251 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:40Z","lastTransitionTime":"2025-11-25T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.396113 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.412909 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.427598 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.441176 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1
220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.466432 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:54:38.885269 6049 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:54:38.885354 6049 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:54:38.885746 6049 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:54:38.885763 6049 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:54:38.885806 6049 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:54:38.885832 6049 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:54:38.886295 6049 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:54:38.886323 6049 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:54:38.886348 6049 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 14:54:38.886361 6049 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:54:38.886371 6049 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:54:38.886381 6049 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:54:38.886392 6049 factory.go:656] Stopping watch factory\\\\nI1125 14:54:38.886401 6049 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:54:38.886411 6049 ovnkube.go:599] Stopped ovnkube\\\\nI1125 
14:54:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:40Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.478960 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.479001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.479013 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.479031 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.479043 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:40Z","lastTransitionTime":"2025-11-25T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.581397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.581429 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.581437 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.581451 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.581461 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:40Z","lastTransitionTime":"2025-11-25T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.683563 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.683597 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.683605 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.683620 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.683629 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:40Z","lastTransitionTime":"2025-11-25T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.750380 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.750536 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.750603 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:54:56.750572901 +0000 UTC m=+55.497707331 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.750665 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.750707 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.750744 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:56.750709854 +0000 UTC m=+55.497844484 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.750853 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.750894 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:56.750885488 +0000 UTC m=+55.498019918 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.786188 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.786239 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.786259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.786281 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.786302 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:40Z","lastTransitionTime":"2025-11-25T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.810652 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.810711 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.810734 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.810834 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.810948 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.811051 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.851973 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.852040 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.852193 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.852209 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.852221 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.852273 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:56.852253315 +0000 UTC m=+55.599387965 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.852610 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.852709 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.852789 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:40 crc kubenswrapper[4731]: E1125 14:54:40.852924 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:56.852903832 +0000 UTC m=+55.600038262 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.888859 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.888913 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.888930 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.888952 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.888967 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:40Z","lastTransitionTime":"2025-11-25T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.991758 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.991794 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.991802 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.991838 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:40 crc kubenswrapper[4731]: I1125 14:54:40.991848 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:40Z","lastTransitionTime":"2025-11-25T14:54:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.095043 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.095086 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.095097 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.095152 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.095166 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:41Z","lastTransitionTime":"2025-11-25T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.124909 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-7xvpg"] Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.125444 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:41 crc kubenswrapper[4731]: E1125 14:54:41.125513 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.141462 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.159665 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.173099 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.183961 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.197758 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.197804 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.197830 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.197848 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.197857 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:41Z","lastTransitionTime":"2025-11-25T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.200758 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.217007 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztx
kf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.220702 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/0.log" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.224200 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.224669 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.227032 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" event={"ID":"f7790405-0cce-4841-8f4f-328933f7be5e","Type":"ContainerStarted","Data":"df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.235725 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.250701 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.256469 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.256514 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrtbw\" (UniqueName: \"kubernetes.io/projected/12559634-427c-4ce7-b9e8-d2dd37299642-kube-api-access-wrtbw\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.266758 4731 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.280045 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.294502 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.300028 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.300082 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.300092 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.300109 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.300119 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:41Z","lastTransitionTime":"2025-11-25T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.308474 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.331562 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d2253
4f461450487245e3bdaf2560\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:54:38.885269 6049 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:54:38.885354 6049 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:54:38.885746 6049 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:54:38.885763 6049 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:54:38.885806 6049 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:54:38.885832 6049 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:54:38.886295 6049 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:54:38.886323 6049 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:54:38.886348 6049 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 14:54:38.886361 6049 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:54:38.886371 6049 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:54:38.886381 6049 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:54:38.886392 6049 factory.go:656] Stopping watch factory\\\\nI1125 14:54:38.886401 6049 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:54:38.886411 6049 ovnkube.go:599] Stopped ovnkube\\\\nI1125 
14:54:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.347230 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.357589 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.357724 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrtbw\" (UniqueName: \"kubernetes.io/projected/12559634-427c-4ce7-b9e8-d2dd37299642-kube-api-access-wrtbw\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:41 crc kubenswrapper[4731]: E1125 14:54:41.357681 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:41 crc kubenswrapper[4731]: E1125 14:54:41.357983 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs podName:12559634-427c-4ce7-b9e8-d2dd37299642 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:41.857915469 +0000 UTC m=+40.605049899 (durationBeforeRetry 500ms). 
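The repeated status-patch failures above share one root cause: every PATCH the kubelet sends is intercepted by the "pod.network-node-identity.openshift.io" admission webhook at https://127.0.0.1:9743, and the TLS handshake fails because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-11-25T14:54:41Z. The 500ms "durationBeforeRetry" on the metrics-certs mount is the volume manager's normal backoff, not a separate fault. Below is a minimal sketch for confirming the certificate window from the node; the endpoint comes straight from the log, but the script itself and its use of the third-party cryptography package (>= 42) are assumptions, not cluster tooling.

    # cert_window.py -- sketch: inspect the serving certificate of the
    # webhook endpoint that the failed POSTs in this log target.
    import datetime, socket, ssl
    from cryptography import x509  # third-party; assumed installed

    HOST, PORT = "127.0.0.1", 9743  # from "Post https://127.0.0.1:9743/pod?timeout=10s"

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspect only; deliberately skip verification

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER, returned even unverified

    cert = x509.load_der_x509_certificate(der)
    now = datetime.datetime.now(datetime.timezone.utc)
    print("notBefore:", cert.not_valid_before_utc)
    print("notAfter: ", cert.not_valid_after_utc)  # the log implies 2025-08-24T17:21:41Z
    print("expired:  ", now > cert.not_valid_after_utc)

If notAfter matches the "is after 2025-08-24T17:21:41Z" text in these entries, rotating the webhook certificate (or correcting a skewed clock) is the fix; nothing in the kubelet's own retry loop will clear it.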
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs") pod "network-metrics-daemon-7xvpg" (UID: "12559634-427c-4ce7-b9e8-d2dd37299642") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.365183 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.380013 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrtbw\" (UniqueName: \"kubernetes.io/projected/12559634-427c-4ce7-b9e8-d2dd37299642-kube-api-access-wrtbw\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.382756 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688
df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\
\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.397564 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.403254 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.403312 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.403335 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.403356 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.403386 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:41Z","lastTransitionTime":"2025-11-25T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.418557 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:54:38.885269 6049 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:54:38.885354 6049 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:54:38.885746 6049 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:54:38.885763 6049 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:54:38.885806 6049 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:54:38.885832 6049 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:54:38.886295 6049 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:54:38.886323 6049 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:54:38.886348 6049 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 14:54:38.886361 6049 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:54:38.886371 6049 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:54:38.886381 6049 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:54:38.886392 6049 factory.go:656] Stopping watch factory\\\\nI1125 14:54:38.886401 6049 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:54:38.886411 6049 ovnkube.go:599] Stopped ovnkube\\\\nI1125 
14:54:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.432976 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.443512 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.459482 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa9
18ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.474448 4731 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.488286 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.499288 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.505605 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.505650 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.505660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.505677 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.505686 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:41Z","lastTransitionTime":"2025-11-25T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.511504 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.525533 4731 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.537545 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.550199 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.563273 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.578507 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.589719 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.601597 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:41Z is after 2025-08-24T17:21:41Z" Nov 25 
14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.608080 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.608126 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.608136 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.608152 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.608161 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:41Z","lastTransitionTime":"2025-11-25T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.710215 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.710261 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.710273 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.710291 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.710305 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:41Z","lastTransitionTime":"2025-11-25T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.812549 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.812608 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.812620 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.812639 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.812652 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:41Z","lastTransitionTime":"2025-11-25T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.863395 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:41 crc kubenswrapper[4731]: E1125 14:54:41.863626 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:41 crc kubenswrapper[4731]: E1125 14:54:41.863719 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs podName:12559634-427c-4ce7-b9e8-d2dd37299642 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:42.863695986 +0000 UTC m=+41.610830416 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs") pod "network-metrics-daemon-7xvpg" (UID: "12559634-427c-4ce7-b9e8-d2dd37299642") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.915912 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.915949 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.915960 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.915978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:41 crc kubenswrapper[4731]: I1125 14:54:41.915988 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:41Z","lastTransitionTime":"2025-11-25T14:54:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.018644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.018688 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.018699 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.018714 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.018726 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:42Z","lastTransitionTime":"2025-11-25T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.121046 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.121089 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.121101 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.121120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.121134 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:42Z","lastTransitionTime":"2025-11-25T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.224973 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.225042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.225064 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.225093 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.225116 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:42Z","lastTransitionTime":"2025-11-25T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.232597 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/1.log" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.233238 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/0.log" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.236602 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706" exitCode=1 Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.236652 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.236734 4731 scope.go:117] "RemoveContainer" containerID="a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.237509 4731 scope.go:117] "RemoveContainer" containerID="88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706" Nov 25 14:54:42 crc kubenswrapper[4731]: E1125 14:54:42.237681 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.255343 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.271392 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.282474 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.297620 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.314094 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 
14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.327795 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.327889 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.327902 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.327923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.327935 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:42Z","lastTransitionTime":"2025-11-25T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.332520 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.346336 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.364790 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.377564 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.390399 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.403931 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.416590 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.430237 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.430276 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.430286 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.430301 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.430313 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:42Z","lastTransitionTime":"2025-11-25T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.435092 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:54:38.885269 6049 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:54:38.885354 6049 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:54:38.885746 6049 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:54:38.885763 6049 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:54:38.885806 6049 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:54:38.885832 6049 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:54:38.886295 6049 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:54:38.886323 6049 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:54:38.886348 6049 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 14:54:38.886361 6049 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:54:38.886371 6049 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:54:38.886381 6049 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:54:38.886392 6049 factory.go:656] Stopping watch factory\\\\nI1125 14:54:38.886401 6049 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:54:38.886411 6049 ovnkube.go:599] Stopped ovnkube\\\\nI1125 
14:54:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"ts:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"b21188fe-5483-4717-afe6-20a41a40b91a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, 
Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.453929 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.469205 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.485343 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa9
18ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.532885 4731 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.532937 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.532949 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.532968 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.532981 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:42Z","lastTransitionTime":"2025-11-25T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.636171 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.636214 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.636225 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.636244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.636256 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:42Z","lastTransitionTime":"2025-11-25T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.739744 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.739792 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.739801 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.739836 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.739849 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:42Z","lastTransitionTime":"2025-11-25T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.811635 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.811711 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.811846 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.811977 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:42 crc kubenswrapper[4731]: E1125 14:54:42.811996 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:42 crc kubenswrapper[4731]: E1125 14:54:42.812197 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:54:42 crc kubenswrapper[4731]: E1125 14:54:42.812678 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:42 crc kubenswrapper[4731]: E1125 14:54:42.812789 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.813581 4731 scope.go:117] "RemoveContainer" containerID="6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.825035 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.842966 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.843019 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.843029 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 
25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.843048 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.843059 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:42Z","lastTransitionTime":"2025-11-25T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.843529 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for 
pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.863609 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.874659 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:42 crc kubenswrapper[4731]: E1125 14:54:42.874872 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:42 crc kubenswrapper[4731]: E1125 14:54:42.875021 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs podName:12559634-427c-4ce7-b9e8-d2dd37299642 
nodeName:}" failed. No retries permitted until 2025-11-25 14:54:44.874988444 +0000 UTC m=+43.622122914 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs") pod "network-metrics-daemon-7xvpg" (UID: "12559634-427c-4ce7-b9e8-d2dd37299642") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.878268 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.892386 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.908488 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.919962 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.935583 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.945111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.945152 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.945186 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.945203 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.945214 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:42Z","lastTransitionTime":"2025-11-25T14:54:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.952887 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.967209 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.980006 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:42 crc kubenswrapper[4731]: I1125 14:54:42.993043 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.015036 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:54:38.885269 6049 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:54:38.885354 6049 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:54:38.885746 6049 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:54:38.885763 6049 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:54:38.885806 6049 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:54:38.885832 6049 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:54:38.886295 6049 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:54:38.886323 6049 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:54:38.886348 6049 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 14:54:38.886361 6049 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:54:38.886371 6049 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:54:38.886381 6049 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:54:38.886392 6049 factory.go:656] Stopping watch factory\\\\nI1125 14:54:38.886401 6049 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:54:38.886411 6049 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:54:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"ts:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"b21188fe-5483-4717-afe6-20a41a40b91a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.033266 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.043526 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.047332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.047383 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.047397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.047417 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.047429 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:43Z","lastTransitionTime":"2025-11-25T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.058936 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378
bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.149561 4731 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.149595 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.149606 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.149623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.149636 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:43Z","lastTransitionTime":"2025-11-25T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.241266 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.242848 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874"} Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.243324 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.244593 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/1.log" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.248078 4731 scope.go:117] "RemoveContainer" containerID="88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706" Nov 25 14:54:43 crc kubenswrapper[4731]: E1125 14:54:43.248248 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.251211 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.251243 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.251252 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.251266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.251277 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:43Z","lastTransitionTime":"2025-11-25T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.257102 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.275717 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c60
55b8feaed119ab6f33347706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a1be84ca43d3212a3470ebe0e8eacb7daf2d22534f461450487245e3bdaf2560\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"message\\\":\\\"or.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1125 14:54:38.885269 6049 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1125 14:54:38.885354 6049 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1125 14:54:38.885746 6049 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1125 14:54:38.885763 6049 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1125 14:54:38.885806 6049 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1125 14:54:38.885832 6049 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1125 14:54:38.886295 6049 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1125 14:54:38.886323 6049 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1125 14:54:38.886348 6049 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1125 14:54:38.886361 6049 handler.go:208] Removed *v1.Node event handler 2\\\\nI1125 14:54:38.886371 6049 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1125 14:54:38.886381 6049 handler.go:208] Removed *v1.Node event handler 7\\\\nI1125 14:54:38.886392 6049 factory.go:656] Stopping watch factory\\\\nI1125 14:54:38.886401 6049 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1125 14:54:38.886411 6049 ovnkube.go:599] Stopped ovnkube\\\\nI1125 14:54:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"ts:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"b21188fe-5483-4717-afe6-20a41a40b91a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, 
EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.289887 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.301017 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.316222 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa9
18ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.326157 4731 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.335796 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.348019 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.354780 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.354844 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.354856 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.354873 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.354885 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:43Z","lastTransitionTime":"2025-11-25T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.361079 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.374004 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.385902 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.396054 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.406672 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.420439 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},
\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.435265 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.448260 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.457622 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.457661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.457670 4731 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.457691 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.457710 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:43Z","lastTransitionTime":"2025-11-25T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.461659 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.473917 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.487452 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.500954 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.516580 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.532528 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.545481 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.563359 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.563428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.563685 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.563712 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.563726 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:43Z","lastTransitionTime":"2025-11-25T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.564005 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.579338 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.593433 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.606550 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.622268 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.648764 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c60
55b8feaed119ab6f33347706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"ts:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"b21188fe-5483-4717-afe6-20a41a40b91a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.666206 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.666247 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.666288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.666304 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.666322 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.666619 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:43Z","lastTransitionTime":"2025-11-25T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.676830 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.707234 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.769332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.769384 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:43 crc 
kubenswrapper[4731]: I1125 14:54:43.769394 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.769413 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.769423 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:43Z","lastTransitionTime":"2025-11-25T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.873218 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.873278 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.873292 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.873311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.873560 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:43Z","lastTransitionTime":"2025-11-25T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.976279 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.976332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.976345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.976366 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:43 crc kubenswrapper[4731]: I1125 14:54:43.976381 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:43Z","lastTransitionTime":"2025-11-25T14:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.079959 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.080011 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.080022 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.080038 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.080050 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:44Z","lastTransitionTime":"2025-11-25T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.182753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.182809 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.182844 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.182871 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.182885 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:44Z","lastTransitionTime":"2025-11-25T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.286204 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.286275 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.286292 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.286321 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.286346 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:44Z","lastTransitionTime":"2025-11-25T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.390339 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.390403 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.390417 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.390439 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.390457 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:44Z","lastTransitionTime":"2025-11-25T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.493449 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.493506 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.493525 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.493546 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.493559 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:44Z","lastTransitionTime":"2025-11-25T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.595945 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.596004 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.596018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.596040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.596052 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:44Z","lastTransitionTime":"2025-11-25T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.699348 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.699398 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.699408 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.699430 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.699442 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:44Z","lastTransitionTime":"2025-11-25T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.802522 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.802606 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.802624 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.802649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.802665 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:44Z","lastTransitionTime":"2025-11-25T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.810979 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.811121 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.811365 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:44 crc kubenswrapper[4731]: E1125 14:54:44.811365 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.811429 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:44 crc kubenswrapper[4731]: E1125 14:54:44.811578 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:44 crc kubenswrapper[4731]: E1125 14:54:44.811655 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:54:44 crc kubenswrapper[4731]: E1125 14:54:44.811683 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.894737 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:44 crc kubenswrapper[4731]: E1125 14:54:44.894960 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:44 crc kubenswrapper[4731]: E1125 14:54:44.895066 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs podName:12559634-427c-4ce7-b9e8-d2dd37299642 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:48.895046843 +0000 UTC m=+47.642181273 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs") pod "network-metrics-daemon-7xvpg" (UID: "12559634-427c-4ce7-b9e8-d2dd37299642") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.906833 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.906878 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.906889 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.906911 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:44 crc kubenswrapper[4731]: I1125 14:54:44.906922 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:44Z","lastTransitionTime":"2025-11-25T14:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.009773 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.009846 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.009861 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.009883 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.009895 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:45Z","lastTransitionTime":"2025-11-25T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.112961 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.113038 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.113055 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.113089 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.113105 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:45Z","lastTransitionTime":"2025-11-25T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.217040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.217442 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.217521 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.217616 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.217739 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:45Z","lastTransitionTime":"2025-11-25T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.324272 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.325066 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.325165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.325191 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.325205 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:45Z","lastTransitionTime":"2025-11-25T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.429095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.429165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.429173 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.429189 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.429199 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:45Z","lastTransitionTime":"2025-11-25T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.533954 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.534007 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.534023 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.534046 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.534067 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:45Z","lastTransitionTime":"2025-11-25T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.637779 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.637858 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.637872 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.637893 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.637932 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:45Z","lastTransitionTime":"2025-11-25T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.741704 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.741767 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.741783 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.741805 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.741852 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:45Z","lastTransitionTime":"2025-11-25T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.845334 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.845486 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.845514 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.845554 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.845586 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:45Z","lastTransitionTime":"2025-11-25T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.949561 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.949623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.949637 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.949658 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:45 crc kubenswrapper[4731]: I1125 14:54:45.949670 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:45Z","lastTransitionTime":"2025-11-25T14:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.052926 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.052992 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.053004 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.053026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.053042 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.054847 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.054907 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.054918 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.054940 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.054956 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: E1125 14:54:46.075766 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.081472 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.081534 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.081548 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.081569 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.081583 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: E1125 14:54:46.102181 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.107086 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.107135 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.107145 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.107162 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.107177 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: E1125 14:54:46.120108 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.125044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.125128 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.125148 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.125173 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.125191 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: E1125 14:54:46.141884 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.146521 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.146594 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.146610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.146639 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.146657 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: E1125 14:54:46.162859 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:46Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:46 crc kubenswrapper[4731]: E1125 14:54:46.163046 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.165402 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.165459 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.165477 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.165507 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.165522 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.267845 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.267893 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.267904 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.267925 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.267949 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.370853 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.370938 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.370962 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.370994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.371016 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.474029 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.474094 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.474110 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.474137 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.474154 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.577148 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.577201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.577210 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.577228 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.577242 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.679880 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.679954 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.679980 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.680013 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.680039 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.787927 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.788692 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.788731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.788762 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.788780 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.810625 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.810689 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.810758 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:46 crc kubenswrapper[4731]: E1125 14:54:46.810940 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.810976 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:46 crc kubenswrapper[4731]: E1125 14:54:46.811095 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:46 crc kubenswrapper[4731]: E1125 14:54:46.811303 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:54:46 crc kubenswrapper[4731]: E1125 14:54:46.811478 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.892395 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.892452 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.892469 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.892495 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.892514 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.996061 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.996122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.996141 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.996168 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:46 crc kubenswrapper[4731]: I1125 14:54:46.996187 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:46Z","lastTransitionTime":"2025-11-25T14:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.098989 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.099030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.099039 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.099055 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.099066 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:47Z","lastTransitionTime":"2025-11-25T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.202018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.202078 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.202095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.202120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.202138 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:47Z","lastTransitionTime":"2025-11-25T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.306131 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.306201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.306223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.306259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.306282 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:47Z","lastTransitionTime":"2025-11-25T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.410560 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.410624 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.410641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.410669 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.410692 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:47Z","lastTransitionTime":"2025-11-25T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.514635 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.514706 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.514726 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.514754 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.514773 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:47Z","lastTransitionTime":"2025-11-25T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.617760 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.617864 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.617884 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.617910 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.617924 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:47Z","lastTransitionTime":"2025-11-25T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.721956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.722015 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.722029 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.722053 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.722070 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:47Z","lastTransitionTime":"2025-11-25T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.825630 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.825703 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.825731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.825765 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.825792 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:47Z","lastTransitionTime":"2025-11-25T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.929218 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.929270 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.929295 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.929317 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:47 crc kubenswrapper[4731]: I1125 14:54:47.929331 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:47Z","lastTransitionTime":"2025-11-25T14:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.032116 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.032170 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.032184 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.032203 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.032216 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:48Z","lastTransitionTime":"2025-11-25T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.136127 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.136175 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.136188 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.136212 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.136227 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:48Z","lastTransitionTime":"2025-11-25T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.238659 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.238702 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.238711 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.238727 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.238765 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:48Z","lastTransitionTime":"2025-11-25T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.341508 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.341601 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.341627 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.341660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.341684 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:48Z","lastTransitionTime":"2025-11-25T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.445310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.445361 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.445377 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.445400 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.445415 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:48Z","lastTransitionTime":"2025-11-25T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.547761 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.547809 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.547834 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.547855 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.547864 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:48Z","lastTransitionTime":"2025-11-25T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.650417 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.650456 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.650466 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.650486 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.650499 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:48Z","lastTransitionTime":"2025-11-25T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.752228 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.752284 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.752297 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.752316 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.752329 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:48Z","lastTransitionTime":"2025-11-25T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.811409 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.811489 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.811443 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.811689 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:48 crc kubenswrapper[4731]: E1125 14:54:48.811840 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:48 crc kubenswrapper[4731]: E1125 14:54:48.811921 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:48 crc kubenswrapper[4731]: E1125 14:54:48.811949 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:48 crc kubenswrapper[4731]: E1125 14:54:48.811979 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.854732 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.854789 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.854797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.854837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.854847 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:48Z","lastTransitionTime":"2025-11-25T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.933488 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:48 crc kubenswrapper[4731]: E1125 14:54:48.933649 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:48 crc kubenswrapper[4731]: E1125 14:54:48.933721 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs podName:12559634-427c-4ce7-b9e8-d2dd37299642 nodeName:}" failed. No retries permitted until 2025-11-25 14:54:56.933701207 +0000 UTC m=+55.680835637 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs") pod "network-metrics-daemon-7xvpg" (UID: "12559634-427c-4ce7-b9e8-d2dd37299642") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.958724 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.958789 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.958799 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.958852 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:48 crc kubenswrapper[4731]: I1125 14:54:48.958868 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:48Z","lastTransitionTime":"2025-11-25T14:54:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.064874 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.064974 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.065029 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.065070 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.065107 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:49Z","lastTransitionTime":"2025-11-25T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.168307 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.168350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.168360 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.168375 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.168388 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:49Z","lastTransitionTime":"2025-11-25T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.270023 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.270060 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.270071 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.270087 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.270122 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:49Z","lastTransitionTime":"2025-11-25T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.373656 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.373707 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.373716 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.373736 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.373746 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:49Z","lastTransitionTime":"2025-11-25T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.476772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.476849 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.476860 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.476877 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.476901 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:49Z","lastTransitionTime":"2025-11-25T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.579114 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.579188 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.579201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.579223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.579239 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:49Z","lastTransitionTime":"2025-11-25T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.682432 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.682490 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.682505 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.682521 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.682533 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:49Z","lastTransitionTime":"2025-11-25T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.785735 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.785809 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.785844 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.785869 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.785882 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:49Z","lastTransitionTime":"2025-11-25T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.888700 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.888751 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.888763 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.888780 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.888791 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:49Z","lastTransitionTime":"2025-11-25T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.991484 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.991540 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.991559 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.991582 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:49 crc kubenswrapper[4731]: I1125 14:54:49.991593 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:49Z","lastTransitionTime":"2025-11-25T14:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.094332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.094379 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.094391 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.094412 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.094424 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:50Z","lastTransitionTime":"2025-11-25T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.197182 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.197224 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.197235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.197252 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.197264 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:50Z","lastTransitionTime":"2025-11-25T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.299393 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.299451 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.299462 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.299477 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.299487 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:50Z","lastTransitionTime":"2025-11-25T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.402480 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.402545 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.402553 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.402571 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.402582 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:50Z","lastTransitionTime":"2025-11-25T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.505538 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.505583 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.505596 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.505615 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.505632 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:50Z","lastTransitionTime":"2025-11-25T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.609065 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.609108 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.609120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.609136 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.609150 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:50Z","lastTransitionTime":"2025-11-25T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.712542 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.712610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.712639 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.712672 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.712697 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:50Z","lastTransitionTime":"2025-11-25T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.811636 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.811719 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.811781 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.811663 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:50 crc kubenswrapper[4731]: E1125 14:54:50.811948 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:50 crc kubenswrapper[4731]: E1125 14:54:50.812149 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:54:50 crc kubenswrapper[4731]: E1125 14:54:50.812281 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:50 crc kubenswrapper[4731]: E1125 14:54:50.812427 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.815232 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.815268 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.815281 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.815297 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.815310 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:50Z","lastTransitionTime":"2025-11-25T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.918984 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.919044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.919062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.919089 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:50 crc kubenswrapper[4731]: I1125 14:54:50.919108 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:50Z","lastTransitionTime":"2025-11-25T14:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.022395 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.022463 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.022485 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.022518 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.022542 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:51Z","lastTransitionTime":"2025-11-25T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.125870 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.125938 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.125951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.125971 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.125985 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:51Z","lastTransitionTime":"2025-11-25T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.229004 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.229056 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.229067 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.229085 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.229098 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:51Z","lastTransitionTime":"2025-11-25T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.333177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.333227 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.333243 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.333275 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.333296 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:51Z","lastTransitionTime":"2025-11-25T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.436435 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.436496 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.436514 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.436539 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.436557 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:51Z","lastTransitionTime":"2025-11-25T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.540014 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.540070 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.540082 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.540100 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.540114 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:51Z","lastTransitionTime":"2025-11-25T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.643190 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.643266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.643288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.643315 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.643347 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:51Z","lastTransitionTime":"2025-11-25T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.746809 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.746885 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.746895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.746914 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.746926 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:51Z","lastTransitionTime":"2025-11-25T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.850156 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.850218 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.850235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.850259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.850275 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:51Z","lastTransitionTime":"2025-11-25T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.953365 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.953437 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.953447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.953466 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:51 crc kubenswrapper[4731]: I1125 14:54:51.953479 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:51Z","lastTransitionTime":"2025-11-25T14:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.056225 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.056262 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.056270 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.056285 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.056296 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:52Z","lastTransitionTime":"2025-11-25T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.159520 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.159579 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.159590 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.159610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.159624 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:52Z","lastTransitionTime":"2025-11-25T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.262405 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.262454 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.262466 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.262487 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.262501 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:52Z","lastTransitionTime":"2025-11-25T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.366225 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.366672 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.367192 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.367597 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.368034 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:52Z","lastTransitionTime":"2025-11-25T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.472069 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.472142 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.472167 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.472199 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.472221 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:52Z","lastTransitionTime":"2025-11-25T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.575780 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.575870 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.575891 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.575916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.575934 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:52Z","lastTransitionTime":"2025-11-25T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.679067 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.679134 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.679152 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.679202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.679220 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:52Z","lastTransitionTime":"2025-11-25T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.782122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.782158 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.782167 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.782182 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.782192 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:52Z","lastTransitionTime":"2025-11-25T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.810833 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.810893 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.810926 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.810838 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:52 crc kubenswrapper[4731]: E1125 14:54:52.811047 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:52 crc kubenswrapper[4731]: E1125 14:54:52.811243 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:52 crc kubenswrapper[4731]: E1125 14:54:52.811322 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:54:52 crc kubenswrapper[4731]: E1125 14:54:52.811483 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.824796 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.848690 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c60
55b8feaed119ab6f33347706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"ts:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"b21188fe-5483-4717-afe6-20a41a40b91a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.864659 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.875350 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.885414 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.885473 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.885483 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.885499 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.885508 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:52Z","lastTransitionTime":"2025-11-25T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.890068 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.900730 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.916108 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.931363 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.948030 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.964076 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.980118 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.988431 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.988475 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.988484 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.988500 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.988511 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:52Z","lastTransitionTime":"2025-11-25T14:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:52 crc kubenswrapper[4731]: I1125 14:54:52.993167 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:52Z is after 2025-08-24T17:21:41Z" Nov 25 
14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.006446 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.017923 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.029457 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.041556 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.091563 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.091595 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.091604 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.091619 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.091629 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:53Z","lastTransitionTime":"2025-11-25T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.194323 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.194356 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.194364 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.194380 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:53 crc kubenswrapper[4731]: I1125 14:54:53.194390 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:53Z","lastTransitionTime":"2025-11-25T14:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The identical five-entry cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats with only the timestamps advancing, roughly every 100 ms, from 14:54:53.297 through 14:54:54.739.]
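Every "Failed to update status for pod" entry above is rejected for the same underlying reason: the serving certificate behind the pod.network-node-identity.openshift.io webhook expired on 2025-08-24T17:21:41Z, while the node's clock reads 2025-11-25. A minimal Go sketch of the validity check the TLS handshake keeps failing; the PEM path is illustrative (the log only shows a webhook-cert volume mounted at /etc/webhook-cert/ inside the webhook container):

// checkcert.go: a minimal sketch of inspecting the NotBefore/NotAfter bounds
// of a PEM serving certificate, like the one behind the failing webhook.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Illustrative path; the real location depends on how the secret is mounted.
	data, err := os.ReadFile("/etc/webhook-cert/tls.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("NotBefore=%s NotAfter=%s\n", cert.NotBefore, cert.NotAfter)
	if time.Now().After(cert.NotAfter) {
		// Same condition the kubelet reports: "current time ... is after ..."
		fmt.Println("certificate has expired")
	}
}

Run against the actual secret material, a NotAfter in the past would be consistent with every x509 failure in this log; rotating the certificate (or correcting a skewed cluster clock) addresses that class of error.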
Nov 25 14:54:54 crc kubenswrapper[4731]: I1125 14:54:54.810977 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:54 crc kubenswrapper[4731]: E1125 14:54:54.811118 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:54 crc kubenswrapper[4731]: I1125 14:54:54.811190 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:54 crc kubenswrapper[4731]: E1125 14:54:54.811237 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:54 crc kubenswrapper[4731]: I1125 14:54:54.811566 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:54 crc kubenswrapper[4731]: E1125 14:54:54.811634 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:54:54 crc kubenswrapper[4731]: I1125 14:54:54.811846 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:54 crc kubenswrapper[4731]: E1125 14:54:54.812048 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
[The node-status cycle repeats again at 14:54:54.841 and 14:54:54.944, and once more at 14:54:55.047.]
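The NotReady spam and the sandbox failures above share one mechanical cause: the kubelet reports NetworkReady=false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A sketch of that directory check; the accepted extensions (.conf, .conflist, .json) are an assumption about CNI's libcni loader, not something this log states:

// cnicheck.go: a sketch of the condition the kubelet keeps reporting as unmet.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d" // directory named in the log message
	entries, err := os.ReadDir(confDir)
	if err != nil {
		log.Fatal(err)
	}
	found := 0
	for _, e := range entries {
		// Extension list assumed from libcni's conventions.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found CNI config:", filepath.Join(confDir, e.Name()))
			found++
		}
	}
	if found == 0 {
		// The state this log shows: no config, so the node stays NotReady
		// and no new pod sandboxes can be created.
		fmt.Println("no CNI configuration file; network plugin not ready")
	}
}

Once the network provider (here, the OVN-Kubernetes stack that the multus init containers were staging) writes its config into that directory, the Ready condition flips and the queued sandboxes start.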
Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.064184 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.075083 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
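The "failed to patch status" entries in this log are hard to read because klog quotes the error string, and the error string itself quotes the JSON patch, so every quote in the payload arrives doubly escaped. A sketch of unwrapping one entry back into plain JSON; the raw fragment in the code is a trimmed stand-in for a full log line:

// decodepatch.go: a sketch of recovering the readable status patch from the
// doubly-escaped err="failed to patch status \"{...}\"" text in these entries.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"strconv"
	"strings"
)

func main() {
	// Trimmed, illustrative fragment: the text between err=" and the final ".
	raw := `failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\"`

	// First unquote: undo the escaping klog applied when printing err="...".
	msg, err := strconv.Unquote(`"` + raw + `"`)
	if err != nil {
		log.Fatal(err)
	}
	// msg now reads: failed to patch status "{\"metadata\":...}" for pod ...
	start := strings.Index(msg, `"{`)
	end := strings.LastIndex(msg, `}"`)
	if start < 0 || end < start {
		log.Fatal("no quoted patch found")
	}
	// Second unquote: the patch string embedded in the error message.
	patch, err := strconv.Unquote(msg[start : end+2])
	if err != nil {
		log.Fatal(err)
	}
	var doc map[string]any
	if err := json.Unmarshal([]byte(patch), &doc); err != nil {
		log.Fatal(err)
	}
	out, _ := json.MarshalIndent(doc, "", "  ")
	fmt.Println(string(out))
}

Applied to the full entries, this turns each opaque blob into an ordinary strategic-merge status patch, which makes it easier to see that the payloads themselves are well-formed and only the webhook call is failing.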
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.091932 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.104448 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.116520 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.128649 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.141439 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.151242 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.151548 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.151580 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.151592 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.151608 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.151620 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:55Z","lastTransitionTime":"2025-11-25T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.164852 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.178217 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.190132 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.200903 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.214869 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.225859 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.236654 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.254003 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.254050 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.254064 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.254084 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.254095 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:55Z","lastTransitionTime":"2025-11-25T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.256064 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c60
55b8feaed119ab6f33347706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"ts:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"b21188fe-5483-4717-afe6-20a41a40b91a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.269570 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:55Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.357278 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.357326 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.357335 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.357352 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.357362 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:55Z","lastTransitionTime":"2025-11-25T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.460416 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.460457 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.460470 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.460486 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.460499 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:55Z","lastTransitionTime":"2025-11-25T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.563147 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.563416 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.563596 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.563764 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.563869 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:55Z","lastTransitionTime":"2025-11-25T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.666590 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.667155 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.667436 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.667635 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.667799 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:55Z","lastTransitionTime":"2025-11-25T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.769721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.770001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.770070 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.770137 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.770209 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:55Z","lastTransitionTime":"2025-11-25T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.872667 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.872704 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.872718 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.872736 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.872747 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:55Z","lastTransitionTime":"2025-11-25T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.975769 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.975849 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.975862 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.975880 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:55 crc kubenswrapper[4731]: I1125 14:54:55.975893 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:55Z","lastTransitionTime":"2025-11-25T14:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.078152 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.078371 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.078465 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.078536 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.078600 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.182041 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.182350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.182420 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.182485 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.182599 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.284720 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.284772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.284782 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.284797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.284809 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.293943 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.293978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.293988 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.294011 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.294024 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.349037 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.354163 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.354217 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
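The status patch itself is rejected for a separate reason, spelled out at the end of the error: the node.network-node-identity.openshift.io webhook serving https://127.0.0.1:9743 presents a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-25T14:54:56Z. A minimal sketch to confirm the expiry from the node, assuming Python 3 with the third-party cryptography package installed (host, port, and dates come from the error above; the code itself is illustrative, not part of the log):

import socket
import ssl

from cryptography import x509

# Fetch the webhook's serving certificate without chain verification
# (it is already known to fail verification) and print its validity window.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
    with ctx.wrap_socket(sock) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41 per the log

Until the certificate is rotated or the node clock corrected, every retry of this patch fails the same way, as the repeat further below shows.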
event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.354231 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.354250 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.354266 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.368490 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.373209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.373247 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.373259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.373276 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.373288 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.385903 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.390128 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.390158 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.390167 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.390182 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.390191 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.402312 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.406419 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.406471 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.406484 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.406504 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.406518 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.422417 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:56Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.422607 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.424372 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.424479 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.424498 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.424520 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.424535 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.527316 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.527376 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.527390 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.527410 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.527424 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.630441 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.630483 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.630495 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.630511 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.630521 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.733163 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.733196 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.733221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.733238 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.733248 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.753695 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.753915 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.753891911 +0000 UTC m=+87.501026351 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.753975 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.754010 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.754097 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.754123 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.754143 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.754135497 +0000 UTC m=+87.501269927 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.754164 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.754151127 +0000 UTC m=+87.501285557 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.811467 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.811563 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.811604 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.811569 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.811661 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.811721 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.811834 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.812006 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.836010 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.836274 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.836374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.836467 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.836541 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.855708 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.855765 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.855939 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.855979 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.855993 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.856004 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.856028 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.856043 4731 projected.go:194] 
Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.856055 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.856037907 +0000 UTC m=+87.603172337 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.856103 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:28.856085958 +0000 UTC m=+87.603220398 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.939159 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.939204 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.939217 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.939239 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.939276 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:56Z","lastTransitionTime":"2025-11-25T14:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:56 crc kubenswrapper[4731]: I1125 14:54:56.956874 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.957104 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:56 crc kubenswrapper[4731]: E1125 14:54:56.957308 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs podName:12559634-427c-4ce7-b9e8-d2dd37299642 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:12.95724692 +0000 UTC m=+71.704381530 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs") pod "network-metrics-daemon-7xvpg" (UID: "12559634-427c-4ce7-b9e8-d2dd37299642") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.042553 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.042598 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.042614 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.042642 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.042662 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:57Z","lastTransitionTime":"2025-11-25T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.144993 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.145067 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.145077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.145093 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.145104 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:57Z","lastTransitionTime":"2025-11-25T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.248099 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.248473 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.248602 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.248744 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.248912 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:57Z","lastTransitionTime":"2025-11-25T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.351915 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.351950 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.351959 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.351975 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.351985 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:57Z","lastTransitionTime":"2025-11-25T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.455238 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.455280 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.455290 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.455307 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.455317 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:57Z","lastTransitionTime":"2025-11-25T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.559224 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.559269 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.559278 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.559294 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.559305 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:57Z","lastTransitionTime":"2025-11-25T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.661601 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.661656 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.661671 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.661692 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.661705 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:57Z","lastTransitionTime":"2025-11-25T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.764892 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.765421 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.765524 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.765604 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.765683 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:57Z","lastTransitionTime":"2025-11-25T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.868103 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.868136 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.868147 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.868167 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.868178 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:57Z","lastTransitionTime":"2025-11-25T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.970982 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.971244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.971345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.971450 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:57 crc kubenswrapper[4731]: I1125 14:54:57.971525 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:57Z","lastTransitionTime":"2025-11-25T14:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.074415 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.074676 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.074752 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.074860 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.074957 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:58Z","lastTransitionTime":"2025-11-25T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.155731 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.170636 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.179201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.179478 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.179566 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.179677 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.179766 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:58Z","lastTransitionTime":"2025-11-25T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.181992 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.197234 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.207891 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.219343 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.231414 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.245710 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 
2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.258845 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.272749 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.282269 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.282519 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.282627 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.282707 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.282777 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:58Z","lastTransitionTime":"2025-11-25T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.289367 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.304187 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":
\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.323245 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.338280 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.352379 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.371632 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.385335 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.385628 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.385702 4731 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.385782 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.385903 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:58Z","lastTransitionTime":"2025-11-25T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.387578 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.407766 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c60
55b8feaed119ab6f33347706\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"ts:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"b21188fe-5483-4717-afe6-20a41a40b91a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:58Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.488224 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.488307 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.488318 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.488338 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.488351 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:58Z","lastTransitionTime":"2025-11-25T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.591444 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.591475 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.591484 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.591500 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.591511 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:58Z","lastTransitionTime":"2025-11-25T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.694592 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.694673 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.694684 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.694708 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.694719 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:58Z","lastTransitionTime":"2025-11-25T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.796939 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.796978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.796990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.797007 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.797020 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:58Z","lastTransitionTime":"2025-11-25T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.810485 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:54:58 crc kubenswrapper[4731]: E1125 14:54:58.811003 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.811037 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.811235 4731 scope.go:117] "RemoveContainer" containerID="88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706"
Nov 25 14:54:58 crc kubenswrapper[4731]: E1125 14:54:58.811346 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.811043 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.811088 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg"
Nov 25 14:54:58 crc kubenswrapper[4731]: E1125 14:54:58.813119 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:54:58 crc kubenswrapper[4731]: E1125 14:54:58.814717 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.899802 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.900109 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.900120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.900135 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:58 crc kubenswrapper[4731]: I1125 14:54:58.900146 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:58Z","lastTransitionTime":"2025-11-25T14:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.002933 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.002972 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.002984 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.003002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.003013 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:59Z","lastTransitionTime":"2025-11-25T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.105324 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.105379 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.105391 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.105409 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.105421 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:59Z","lastTransitionTime":"2025-11-25T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.213403 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.213471 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.213484 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.213503 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.213514 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:59Z","lastTransitionTime":"2025-11-25T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.310616 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/1.log"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.313373 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef"}
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.314015 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.315246 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.315292 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.315301 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.315317 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.315327 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:59Z","lastTransitionTime":"2025-11-25T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.330121 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.345211 4731 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.357977 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.376159 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.390020 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.404603 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.417912 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.417967 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.417980 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.417997 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.418007 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:59Z","lastTransitionTime":"2025-11-25T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.419533 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.431787 4731 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.445319 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.472613 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.487992 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.502482 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.516298 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.519703 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.519738 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.519747 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.519763 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.519773 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:59Z","lastTransitionTime":"2025-11-25T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.529392 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.549365 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.564525 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.584638 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af19
72a9b74b8af504715a323eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"ts:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"b21188fe-5483-4717-afe6-20a41a40b91a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, 
Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.622268 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.622298 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.622307 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.622321 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.622330 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:59Z","lastTransitionTime":"2025-11-25T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.725334 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.725374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.725386 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.725403 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.725451 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:59Z","lastTransitionTime":"2025-11-25T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.828215 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.828265 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.828281 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.828305 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.828319 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:59Z","lastTransitionTime":"2025-11-25T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.931314 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.931359 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.931371 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.931386 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:54:59 crc kubenswrapper[4731]: I1125 14:54:59.931397 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:54:59Z","lastTransitionTime":"2025-11-25T14:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.034438 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.034494 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.034508 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.034526 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.034538 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:00Z","lastTransitionTime":"2025-11-25T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.137030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.137102 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.137117 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.137138 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.137150 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:00Z","lastTransitionTime":"2025-11-25T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.239855 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.239899 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.239915 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.239935 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.239950 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:00Z","lastTransitionTime":"2025-11-25T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.319902 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/2.log" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.320768 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/1.log" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.323702 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef" exitCode=1 Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.323749 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.323792 4731 scope.go:117] "RemoveContainer" containerID="88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.329504 4731 scope.go:117] "RemoveContainer" containerID="3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef" Nov 25 14:55:00 crc kubenswrapper[4731]: E1125 14:55:00.329921 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.339772 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.342664 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.342699 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.342724 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.342740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.342749 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:00Z","lastTransitionTime":"2025-11-25T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.355073 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.370019 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.382228 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.394020 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.408344 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.423869 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.438171 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.444503 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.444538 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.444549 4731 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.444566 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.444576 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:00Z","lastTransitionTime":"2025-11-25T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.453190 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.468558 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.481609 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.494167 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 
14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.511731 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\
",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.531546 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af19
72a9b74b8af504715a323eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://88d1c0e676b898c55c840d42749d156371b98c6055b8feaed119ab6f33347706\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"ts:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_switch_crc\\\\\\\", UUID:\\\\\\\"b21188fe-5483-4717-afe6-20a41a40b91a\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_default/kubernetes_TCP_node_router_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"default/kubernetes\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.1\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{services.Addr{IP:\\\\\\\"169.254.0.2\\\\\\\", Port:6443, Template:(*services.Template)(nil)}}}}, Templates:services.TemplateMap(nil), Switches:[]string(nil), Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Grou\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:59Z\\\",\\\"message\\\":\\\"ervices.lbConfig(nil)\\\\nF1125 14:54:59.642336 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:54:59.642343 6431 
loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1125 14:54:59.642347 6431 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mou
ntPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.544364 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.546994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.547030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.547042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.547059 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.547079 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:00Z","lastTransitionTime":"2025-11-25T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.554361 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.568691 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:00Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.649660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.649703 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:00 crc 
kubenswrapper[4731]: I1125 14:55:00.649748 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.649778 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.649793 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:00Z","lastTransitionTime":"2025-11-25T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.752106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.752148 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.752158 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.752175 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.752186 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:00Z","lastTransitionTime":"2025-11-25T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.811080 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.811231 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:00 crc kubenswrapper[4731]: E1125 14:55:00.811421 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.811927 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:00 crc kubenswrapper[4731]: E1125 14:55:00.812236 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.812258 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:00 crc kubenswrapper[4731]: E1125 14:55:00.812352 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:00 crc kubenswrapper[4731]: E1125 14:55:00.812445 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.854519 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.854554 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.854563 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.854578 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.854588 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:00Z","lastTransitionTime":"2025-11-25T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.957758 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.957861 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.957877 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.957897 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:00 crc kubenswrapper[4731]: I1125 14:55:00.957910 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:00Z","lastTransitionTime":"2025-11-25T14:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.060401 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.060436 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.060444 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.060459 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.060468 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:01Z","lastTransitionTime":"2025-11-25T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.162982 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.163037 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.163049 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.163068 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.163084 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:01Z","lastTransitionTime":"2025-11-25T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.266422 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.266482 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.266493 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.266512 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.266546 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:01Z","lastTransitionTime":"2025-11-25T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.328977 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/2.log" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.331914 4731 scope.go:117] "RemoveContainer" containerID="3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef" Nov 25 14:55:01 crc kubenswrapper[4731]: E1125 14:55:01.332180 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.348111 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.361769 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.369371 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.369408 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.369417 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.369431 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.369441 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:01Z","lastTransitionTime":"2025-11-25T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.375257 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 
14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.390647 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.405259 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.419006 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.431298 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.446285 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.470556 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af19
72a9b74b8af504715a323eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:59Z\\\",\\\"message\\\":\\\"ervices.lbConfig(nil)\\\\nF1125 14:54:59.642336 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:54:59.642343 6431 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1125 14:54:59.642347 6431 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.471785 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.471838 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.471882 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.471898 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.471908 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:01Z","lastTransitionTime":"2025-11-25T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.484784 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.498130 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.516419 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.528836 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.542226 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.555158 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 
2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.570523 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.574280 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.574317 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.574328 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.574346 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.574361 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:01Z","lastTransitionTime":"2025-11-25T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.581184 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:01Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.676661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.676709 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.676720 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.676736 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.676746 4731 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:01Z","lastTransitionTime":"2025-11-25T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.779775 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.779841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.779853 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.779870 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.779883 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:01Z","lastTransitionTime":"2025-11-25T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.882832 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.882873 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.882882 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.882897 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.882910 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:01Z","lastTransitionTime":"2025-11-25T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.985564 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.985638 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.985660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.985691 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:01 crc kubenswrapper[4731]: I1125 14:55:01.985709 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:01Z","lastTransitionTime":"2025-11-25T14:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.088513 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.088573 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.088586 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.088610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.088626 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:02Z","lastTransitionTime":"2025-11-25T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.191809 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.191913 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.191933 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.191961 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.191978 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:02Z","lastTransitionTime":"2025-11-25T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.294611 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.294672 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.294686 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.294706 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.294719 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:02Z","lastTransitionTime":"2025-11-25T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.396879 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.396918 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.396928 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.396945 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.396957 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:02Z","lastTransitionTime":"2025-11-25T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.498705 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.498752 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.498761 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.498783 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.498795 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:02Z","lastTransitionTime":"2025-11-25T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.600522 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.600577 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.600589 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.600607 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.600622 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:02Z","lastTransitionTime":"2025-11-25T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.703201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.703232 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.703241 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.703254 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.703264 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:02Z","lastTransitionTime":"2025-11-25T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.806130 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.806170 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.806182 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.806199 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.806210 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:02Z","lastTransitionTime":"2025-11-25T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.811424 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.811523 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.811424 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:02 crc kubenswrapper[4731]: E1125 14:55:02.811552 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.811585 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:02 crc kubenswrapper[4731]: E1125 14:55:02.811633 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:02 crc kubenswrapper[4731]: E1125 14:55:02.811691 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:02 crc kubenswrapper[4731]: E1125 14:55:02.811749 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.825806 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"
name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.838041 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.850889 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.862433 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.874738 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.887338 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.900072 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.908246 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.908281 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.908289 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.908303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.908313 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:02Z","lastTransitionTime":"2025-11-25T14:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.921084 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af19
72a9b74b8af504715a323eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:59Z\\\",\\\"message\\\":\\\"ervices.lbConfig(nil)\\\\nF1125 14:54:59.642336 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:54:59.642343 6431 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1125 14:54:59.642347 6431 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.935504 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.946659 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.963487 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.976606 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:02 crc kubenswrapper[4731]: I1125 14:55:02.987907 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.000942 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:02Z is after 
2025-08-24T17:21:41Z" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.011015 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.011043 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.011052 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.011067 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.011081 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.011096 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 
14:55:03.019580 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.030595 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:03Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.112995 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.113027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.113036 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.113053 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.113063 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.215842 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.215873 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.215882 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.215897 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.215907 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.318841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.319168 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.319182 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.319204 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.319218 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.421701 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.421738 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.421752 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.421771 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.421783 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.524550 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.524587 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.524599 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.524615 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.524626 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.627082 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.627128 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.627139 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.627156 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.627168 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
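[analysis note] The repeating NodeNotReady / "Node became not ready" entries are a downstream symptom of the same expired certificate: ovnkube-controller, the container that writes the OVN-Kubernetes CNI configuration, is in CrashLoopBackOff because its node-annotation calls hit the node.network-node-identity.openshift.io webhook and fail the identical x509 check (see the termination message logged at 14:54:59 above). With no CNI configuration file ever landing in /etc/kubernetes/cni/net.d/, the kubelet republishes Ready=False on every sync loop. The condition object in the setters.go:603 entries is plain JSON; below is a small Go sketch decoding one payload copied verbatim from the log, with the struct shape inferred from the fields visible in these entries (it mirrors, but is not imported from, the Kubernetes NodeCondition type):

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // nodeCondition mirrors the fields visible in the setters.go:603 log lines.
    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Condition payload copied verbatim from one of the entries above.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            log.Fatalf("decode condition: %v", err)
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }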
Has your network provider started?"} Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.730286 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.730331 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.730339 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.730355 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.730364 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.833550 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.833601 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.833610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.833625 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.833635 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.936594 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.936643 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.936654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.936675 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:03 crc kubenswrapper[4731]: I1125 14:55:03.936690 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:03Z","lastTransitionTime":"2025-11-25T14:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.039714 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.039770 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.039789 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.039840 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.039854 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:04Z","lastTransitionTime":"2025-11-25T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.142900 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.142943 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.142951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.142969 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.142980 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:04Z","lastTransitionTime":"2025-11-25T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.245633 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.245666 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.245675 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.245690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.245702 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:04Z","lastTransitionTime":"2025-11-25T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.348373 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.348408 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.348419 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.348436 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.348447 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:04Z","lastTransitionTime":"2025-11-25T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.451690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.451766 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.451775 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.451798 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.451809 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:04Z","lastTransitionTime":"2025-11-25T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.554977 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.555026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.555040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.555059 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.555073 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:04Z","lastTransitionTime":"2025-11-25T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.659293 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.659341 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.659353 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.659371 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.659385 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:04Z","lastTransitionTime":"2025-11-25T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.762148 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.762198 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.762211 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.762232 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.762246 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:04Z","lastTransitionTime":"2025-11-25T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.811215 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.811288 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.811307 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.811304 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:04 crc kubenswrapper[4731]: E1125 14:55:04.811540 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:04 crc kubenswrapper[4731]: E1125 14:55:04.811657 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:04 crc kubenswrapper[4731]: E1125 14:55:04.812021 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:04 crc kubenswrapper[4731]: E1125 14:55:04.812186 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.864847 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.864900 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.864911 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.864938 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.864952 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:04Z","lastTransitionTime":"2025-11-25T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.967482 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.967527 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.967541 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.967562 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:04 crc kubenswrapper[4731]: I1125 14:55:04.967577 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:04Z","lastTransitionTime":"2025-11-25T14:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.070886 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.070967 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.070992 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.071027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.071111 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:05Z","lastTransitionTime":"2025-11-25T14:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.175589 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.175653 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.175668 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.175703 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.175714 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:05Z","lastTransitionTime":"2025-11-25T14:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.278205 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.278240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.278253 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.278271 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.278282 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:05Z","lastTransitionTime":"2025-11-25T14:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.381833 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.381888 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.381900 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.381923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.381938 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:05Z","lastTransitionTime":"2025-11-25T14:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.484783 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.484842 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.484852 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.484870 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.484882 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:05Z","lastTransitionTime":"2025-11-25T14:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.587951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.588009 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.588024 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.588049 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.588064 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:05Z","lastTransitionTime":"2025-11-25T14:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.691564 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.691603 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.691616 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.691633 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.691645 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:05Z","lastTransitionTime":"2025-11-25T14:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.794346 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.794375 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.794383 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.794400 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.794410 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:05Z","lastTransitionTime":"2025-11-25T14:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.897153 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.897184 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.897194 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.897210 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.897222 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:05Z","lastTransitionTime":"2025-11-25T14:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.999341 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.999447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.999466 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.999482 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:05 crc kubenswrapper[4731]: I1125 14:55:05.999497 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:05Z","lastTransitionTime":"2025-11-25T14:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.102441 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.102491 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.102504 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.102524 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.102537 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.205037 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.205108 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.205123 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.205145 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.205159 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.307806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.307884 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.307895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.307915 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.307927 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.410706 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.410748 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.410757 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.410773 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.410784 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.513134 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.513178 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.513188 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.513203 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.513214 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.615910 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.615961 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.615973 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.615996 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.616010 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.718910 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.718981 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.719007 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.719040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.719065 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.767897 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.767951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.767964 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.767984 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.767997 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: E1125 14:55:06.782450 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:06Z is after 2025-08-24T17:21:41Z"
Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.787191 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.787226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.787246 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.787265 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.787280 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:06 crc kubenswrapper[4731]: E1125 14:55:06.802261 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.806969 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.807001 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.807012 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.807033 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.807047 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.810851 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.810904 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.811085 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:06 crc kubenswrapper[4731]: E1125 14:55:06.810964 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:06 crc kubenswrapper[4731]: E1125 14:55:06.811423 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:06 crc kubenswrapper[4731]: E1125 14:55:06.811265 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.811485 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:06 crc kubenswrapper[4731]: E1125 14:55:06.811713 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:06 crc kubenswrapper[4731]: E1125 14:55:06.822102 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.827810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.827877 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.827892 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.827910 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.827923 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: E1125 14:55:06.843412 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.847978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.848016 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.848027 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.848044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.848059 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: E1125 14:55:06.863230 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:06Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:06 crc kubenswrapper[4731]: E1125 14:55:06.863410 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.865189 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.865224 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.865235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.865254 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.865267 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.967747 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.967793 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.967806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.967836 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:06 crc kubenswrapper[4731]: I1125 14:55:06.967851 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:06Z","lastTransitionTime":"2025-11-25T14:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:07 crc kubenswrapper[4731]: I1125 14:55:07.070109 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:07 crc kubenswrapper[4731]: I1125 14:55:07.070141 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:07 crc kubenswrapper[4731]: I1125 14:55:07.070152 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:07 crc kubenswrapper[4731]: I1125 14:55:07.070166 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:07 crc kubenswrapper[4731]: I1125 14:55:07.070175 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:07Z","lastTransitionTime":"2025-11-25T14:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.715066 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.715105 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.715120 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.715139 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.715161 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:08Z","lastTransitionTime":"2025-11-25T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.811122 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.811189 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:55:08 crc kubenswrapper[4731]: E1125 14:55:08.811307 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.811387 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.811120 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg"
Nov 25 14:55:08 crc kubenswrapper[4731]: E1125 14:55:08.811518 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:55:08 crc kubenswrapper[4731]: E1125 14:55:08.811574 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:55:08 crc kubenswrapper[4731]: E1125 14:55:08.811656 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.817593 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.817637 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.817648 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.817663 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.817698 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:08Z","lastTransitionTime":"2025-11-25T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.920310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.920343 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.920354 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.920370 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:08 crc kubenswrapper[4731]: I1125 14:55:08.920381 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:08Z","lastTransitionTime":"2025-11-25T14:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.810864 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.810864 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.810985 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.811056 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg"
Nov 25 14:55:10 crc kubenswrapper[4731]: E1125 14:55:10.811158 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:55:10 crc kubenswrapper[4731]: E1125 14:55:10.811260 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642"
Nov 25 14:55:10 crc kubenswrapper[4731]: E1125 14:55:10.811346 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:55:10 crc kubenswrapper[4731]: E1125 14:55:10.811496 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.882773 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.882856 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.882867 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.882884 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.882894 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:10Z","lastTransitionTime":"2025-11-25T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.985091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.985137 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.985150 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.985168 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:10 crc kubenswrapper[4731]: I1125 14:55:10.985181 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:10Z","lastTransitionTime":"2025-11-25T14:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.087627 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.087679 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.087694 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.087715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.087729 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:11Z","lastTransitionTime":"2025-11-25T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.190567 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.190612 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.190622 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.190641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.190653 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:11Z","lastTransitionTime":"2025-11-25T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.293795 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.293916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.293931 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.293953 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.293965 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:11Z","lastTransitionTime":"2025-11-25T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.396271 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.396335 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.396350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.396372 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.396385 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:11Z","lastTransitionTime":"2025-11-25T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.499258 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.499300 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.499314 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.499334 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.499345 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:11Z","lastTransitionTime":"2025-11-25T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.602311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.602351 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.602362 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.602377 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.602387 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:11Z","lastTransitionTime":"2025-11-25T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.704894 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.704924 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.704933 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.704947 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.704956 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:11Z","lastTransitionTime":"2025-11-25T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.807898 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.807940 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.807951 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.807968 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.807980 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:11Z","lastTransitionTime":"2025-11-25T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.910914 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.910961 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.910973 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.910996 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:11 crc kubenswrapper[4731]: I1125 14:55:11.911009 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:11Z","lastTransitionTime":"2025-11-25T14:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.013430 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.013465 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.013474 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.013487 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.013498 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:12Z","lastTransitionTime":"2025-11-25T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.115929 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.115965 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.115976 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.116021 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.116036 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:12Z","lastTransitionTime":"2025-11-25T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.218934 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.218976 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.218987 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.219002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.219015 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:12Z","lastTransitionTime":"2025-11-25T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.323202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.323265 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.323281 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.323304 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.323328 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:12Z","lastTransitionTime":"2025-11-25T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.425530 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.425571 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.425583 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.425600 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.425613 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:12Z","lastTransitionTime":"2025-11-25T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.527981 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.528021 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.528033 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.528049 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.528061 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:12Z","lastTransitionTime":"2025-11-25T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
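
[Annotation] The block above repeats roughly every 100 ms: the kubelet records the standard node conditions, then re-posts Ready=False because the container runtime reports no CNI configuration in /etc/kubernetes/cni/net.d/. A minimal Go sketch of the file check behind that message follows; the extension list (.conf, .conflist, .json) is an assumption based on libcni's usual defaults, and the real logic lives in the runtime's CNI package, not in this form.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file. The extension list is assumed (libcni-style
// defaults), not taken from this log.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// The directory named in the NetworkReady=false message above.
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d/")
	fmt.Printf("CNI config present: %v (err: %v)\n", ok, err)
}

Until a file appears there (normally written by the network operator's multus/OVN pods), the node stays NotReady and the heartbeat entries keep repeating.
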
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.631028 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.631088 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.631101 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.631121 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.631135 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:12Z","lastTransitionTime":"2025-11-25T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.733845 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.734041 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.734160 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.734269 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.734366 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:12Z","lastTransitionTime":"2025-11-25T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.811365 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.811438 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:55:12 crc kubenswrapper[4731]: E1125 14:55:12.811538 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.811446 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.811474 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg"
Nov 25 14:55:12 crc kubenswrapper[4731]: E1125 14:55:12.811653 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:55:12 crc kubenswrapper[4731]: E1125 14:55:12.811856 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642"
Nov 25 14:55:12 crc kubenswrapper[4731]: E1125 14:55:12.811944 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.829601 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.837644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.837725 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.837740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.837766 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.837780 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:12Z","lastTransitionTime":"2025-11-25T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
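
[Annotation] From here on, every pod status patch fails the same way: the kubelet's PATCH to the apiserver is intercepted by the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, whose serving certificate expired on 2025-08-24T17:21:41Z, about three months before the node's clock (2025-11-25T14:55:12Z). Below is a sketch of the validity-window test that Go's TLS/x509 verification applies; NotAfter and the clock value are taken from the log, while NotBefore and the struct-literal construction are illustrative assumptions (a real verifier parses the full chain).

package main

import (
	"crypto/x509"
	"fmt"
	"time"
)

// checkValidity mirrors the window check that makes the handshake above
// fail: a leaf certificate is rejected once "now" passes NotAfter.
func checkValidity(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("x509: certificate is not yet valid (NotBefore=%s)", cert.NotBefore)
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("x509: certificate has expired (NotAfter=%s, current time %s)", cert.NotAfter, now)
	}
	return nil
}

func main() {
	cert := &x509.Certificate{
		// NotBefore is an assumption; NotAfter matches the log.
		NotBefore: time.Date(2024, 8, 24, 17, 21, 41, 0, time.UTC),
		NotAfter:  time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC),
	}
	// The kubelet's clock at the time of the failed webhook call.
	now := time.Date(2025, 11, 25, 14, 55, 12, 0, time.UTC)
	fmt.Println(checkValidity(cert, now)) // prints the "has expired" error
}

The consequence visible in the entries that follow: status updates for otherwise healthy pods are rejected wholesale until the webhook's certificate is rotated.
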
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.841336 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.854953 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.868672 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.882850 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.894921 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.908401 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.921205 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z"
Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.937124 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z"
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.940397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.940406 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.940427 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.940437 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:12Z","lastTransitionTime":"2025-11-25T14:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.951120 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.971751 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:12 crc kubenswrapper[4731]: I1125 14:55:12.990634 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:12Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.010158 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.023348 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.025955 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:13 crc kubenswrapper[4731]: E1125 14:55:13.026165 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:55:13 crc kubenswrapper[4731]: E1125 14:55:13.026273 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs podName:12559634-427c-4ce7-b9e8-d2dd37299642 nodeName:}" failed. No retries permitted until 2025-11-25 14:55:45.026246473 +0000 UTC m=+103.773381033 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs") pod "network-metrics-daemon-7xvpg" (UID: "12559634-427c-4ce7-b9e8-d2dd37299642") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.040978 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.042472 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.042506 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.042515 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.042530 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.042543 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:13Z","lastTransitionTime":"2025-11-25T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.056015 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.075952 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af19
72a9b74b8af504715a323eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:59Z\\\",\\\"message\\\":\\\"ervices.lbConfig(nil)\\\\nF1125 14:54:59.642336 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:54:59.642343 6431 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1125 14:54:59.642347 6431 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:13Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.144687 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.144726 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.144736 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.144752 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:13 crc kubenswrapper[4731]: I1125 14:55:13.144763 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:13Z","lastTransitionTime":"2025-11-25T14:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
[... the preceding five-entry block ("Recording event message for node" with NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady, then setters.go:603 "Node became not ready") repeats unchanged apart from its timestamps, roughly every 100 ms, from 14:55:13.247 through 14:55:14.581 (fourteen repetitions elided) ...]
Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.683376 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.683423 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.683435 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.683460 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.683475 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:14Z","lastTransitionTime":"2025-11-25T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.785403 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.785456 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.785472 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.785492 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.785511 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:14Z","lastTransitionTime":"2025-11-25T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.810891 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.810891 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.811005 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:14 crc kubenswrapper[4731]: E1125 14:55:14.811301 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:14 crc kubenswrapper[4731]: E1125 14:55:14.811433 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:14 crc kubenswrapper[4731]: E1125 14:55:14.811525 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.811607 4731 scope.go:117] "RemoveContainer" containerID="3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef" Nov 25 14:55:14 crc kubenswrapper[4731]: E1125 14:55:14.811752 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.811901 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:14 crc kubenswrapper[4731]: E1125 14:55:14.812173 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.888472 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.888509 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.888518 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.888533 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.888545 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:14Z","lastTransitionTime":"2025-11-25T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.991793 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.991864 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.991877 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.991894 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:14 crc kubenswrapper[4731]: I1125 14:55:14.991912 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:14Z","lastTransitionTime":"2025-11-25T14:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.094187 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.094225 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.094234 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.094248 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.094258 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:15Z","lastTransitionTime":"2025-11-25T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.196575 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.196623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.196635 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.196654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.196669 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:15Z","lastTransitionTime":"2025-11-25T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.299725 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.299761 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.299771 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.299787 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.299799 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:15Z","lastTransitionTime":"2025-11-25T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.376313 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xx4w_062d0f18-096f-4f99-9faa-74dbd454bcd0/kube-multus/0.log" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.376365 4731 generic.go:334] "Generic (PLEG): container finished" podID="062d0f18-096f-4f99-9faa-74dbd454bcd0" containerID="b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac" exitCode=1 Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.376397 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xx4w" event={"ID":"062d0f18-096f-4f99-9faa-74dbd454bcd0","Type":"ContainerDied","Data":"b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac"} Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.376778 4731 scope.go:117] "RemoveContainer" containerID="b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.389703 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.402131 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.402255 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.402263 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.402279 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.402289 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:15Z","lastTransitionTime":"2025-11-25T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.407568 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:59Z\\\",\\\"message\\\":\\\"ervices.lbConfig(nil)\\\\nF1125 14:54:59.642336 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:54:59.642343 6431 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1125 14:54:59.642347 6431 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.425520 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.438003 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.448677 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.464048 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.477843 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.487457 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.497954 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.504796 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.504845 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.504859 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.504877 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.504889 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:15Z","lastTransitionTime":"2025-11-25T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.510498 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.523934 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.535715 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.546803 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.560775 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"2025-11-25T14:54:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816\\\\n2025-11-25T14:54:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816 to /host/opt/cni/bin/\\\\n2025-11-25T14:54:29Z [verbose] multus-daemon 
started\\\\n2025-11-25T14:54:29Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:55:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.574134 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.587739 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.601103 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:15Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.607600 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.607650 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.607663 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.607680 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.607690 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:15Z","lastTransitionTime":"2025-11-25T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.710611 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.710655 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.710664 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.710683 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.710696 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:15Z","lastTransitionTime":"2025-11-25T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.812775 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.813170 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.813269 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.813370 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.813470 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:15Z","lastTransitionTime":"2025-11-25T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.915570 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.915610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.915620 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.915635 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:15 crc kubenswrapper[4731]: I1125 14:55:15.915645 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:15Z","lastTransitionTime":"2025-11-25T14:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.018165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.018215 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.018231 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.018251 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.018264 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:16Z","lastTransitionTime":"2025-11-25T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.121107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.121159 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.121171 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.121187 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.121197 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:16Z","lastTransitionTime":"2025-11-25T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.223494 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.223544 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.223555 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.223572 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.223595 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:16Z","lastTransitionTime":"2025-11-25T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.325732 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.325773 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.325781 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.325797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.325809 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:16Z","lastTransitionTime":"2025-11-25T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.381190 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xx4w_062d0f18-096f-4f99-9faa-74dbd454bcd0/kube-multus/0.log" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.381248 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xx4w" event={"ID":"062d0f18-096f-4f99-9faa-74dbd454bcd0","Type":"ContainerStarted","Data":"471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.396247 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"2025-11-25T14:54:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816\\\\n2025-11-25T14:54:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816 to /host/opt/cni/bin/\\\\n2025-11-25T14:54:29Z [verbose] multus-daemon started\\\\n2025-11-25T14:54:29Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:55:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.405610 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.413754 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.427443 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.428284 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.428332 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.428345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.428365 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.428378 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:16Z","lastTransitionTime":"2025-11-25T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.442345 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.456132 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.469779 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.481093 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.499113 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:59Z\\\",\\\"message\\\":\\\"ervices.lbConfig(nil)\\\\nF1125 14:54:59.642336 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:54:59.642343 6431 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1125 14:54:59.642347 6431 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.513081 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.524660 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.531280 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.531322 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.531333 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.531350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.531364 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:16Z","lastTransitionTime":"2025-11-25T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.541130 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f
8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/e
tc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitC
ode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.554306 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.568485 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.583940 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 
2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.598364 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.609560 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:16Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.633664 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.633718 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.633731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.633750 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.633763 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:16Z","lastTransitionTime":"2025-11-25T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.736613 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.736653 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.736664 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.736682 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.736695 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:16Z","lastTransitionTime":"2025-11-25T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.811241 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.811292 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:16 crc kubenswrapper[4731]: E1125 14:55:16.811391 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.811653 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.813778 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:16 crc kubenswrapper[4731]: E1125 14:55:16.814125 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:16 crc kubenswrapper[4731]: E1125 14:55:16.814388 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:16 crc kubenswrapper[4731]: E1125 14:55:16.816671 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.839069 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.839111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.839122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.839138 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.839150 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:16Z","lastTransitionTime":"2025-11-25T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.941844 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.941890 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.941901 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.941916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:16 crc kubenswrapper[4731]: I1125 14:55:16.941925 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:16Z","lastTransitionTime":"2025-11-25T14:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.044229 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.044257 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.044264 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.044280 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.044290 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.146759 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.146797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.146807 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.146840 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.146852 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.201282 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.201328 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.201337 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.201358 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.201369 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: E1125 14:55:17.214198 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.218243 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.218288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.218298 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.218314 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.218324 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: E1125 14:55:17.232219 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.236885 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.236925 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.236934 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.236949 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.236960 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: E1125 14:55:17.250001 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.253654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.253680 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.253688 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.253702 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.253712 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: E1125 14:55:17.264972 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.268661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.268706 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.268717 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.268743 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.268754 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: E1125 14:55:17.280534 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:17Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:17 crc kubenswrapper[4731]: E1125 14:55:17.280715 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.282342 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.282361 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.282368 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.282383 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.282391 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.384231 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.384268 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.384278 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.384292 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.384303 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.486727 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.486770 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.486780 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.486797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.486807 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.589689 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.589732 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.589744 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.589764 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.589776 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.693197 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.693281 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.693300 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.693338 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.693356 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.796122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.796160 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.796170 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.796184 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.796194 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.898549 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.898593 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.898605 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.898660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:17 crc kubenswrapper[4731]: I1125 14:55:17.898678 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:17Z","lastTransitionTime":"2025-11-25T14:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.001769 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.001859 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.001874 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.001894 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.001906 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:18Z","lastTransitionTime":"2025-11-25T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.104212 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.104517 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.104769 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.104978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.105130 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:18Z","lastTransitionTime":"2025-11-25T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.207704 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.208093 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.208296 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.208439 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.208539 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:18Z","lastTransitionTime":"2025-11-25T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.310488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.310519 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.310528 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.310543 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.310552 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:18Z","lastTransitionTime":"2025-11-25T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.415723 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.415771 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.415788 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.415810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.415840 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:18Z","lastTransitionTime":"2025-11-25T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.518781 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.518900 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.518916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.518935 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.518948 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:18Z","lastTransitionTime":"2025-11-25T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.620861 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.620904 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.620916 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.620935 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.620947 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:18Z","lastTransitionTime":"2025-11-25T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.722923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.722963 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.722974 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.722990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.722998 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:18Z","lastTransitionTime":"2025-11-25T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.810712 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.810752 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.810760 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:18 crc kubenswrapper[4731]: E1125 14:55:18.810871 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.811038 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:18 crc kubenswrapper[4731]: E1125 14:55:18.811084 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:18 crc kubenswrapper[4731]: E1125 14:55:18.811189 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:18 crc kubenswrapper[4731]: E1125 14:55:18.812242 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.825072 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.825109 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.825118 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.825133 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.825141 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:18Z","lastTransitionTime":"2025-11-25T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.927721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.927774 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.927785 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.927805 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:18 crc kubenswrapper[4731]: I1125 14:55:18.927835 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:18Z","lastTransitionTime":"2025-11-25T14:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.031030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.031077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.031088 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.031107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.031119 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:19Z","lastTransitionTime":"2025-11-25T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.133877 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.133913 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.133922 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.133937 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.133947 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:19Z","lastTransitionTime":"2025-11-25T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.236660 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.236695 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.236704 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.236718 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.236728 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:19Z","lastTransitionTime":"2025-11-25T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.339709 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.340179 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.340260 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.340325 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.340412 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:19Z","lastTransitionTime":"2025-11-25T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.442604 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.442640 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.442648 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.442663 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.442672 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:19Z","lastTransitionTime":"2025-11-25T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.545182 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.545225 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.545234 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.545251 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.545265 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:19Z","lastTransitionTime":"2025-11-25T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.648439 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.648493 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.648506 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.648529 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.648542 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:19Z","lastTransitionTime":"2025-11-25T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.750681 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.750740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.750753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.750773 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.750785 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:19Z","lastTransitionTime":"2025-11-25T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.853214 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.853261 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.853272 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.853288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.853299 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:19Z","lastTransitionTime":"2025-11-25T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.955900 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.955954 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.955968 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.955988 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:19 crc kubenswrapper[4731]: I1125 14:55:19.956001 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:19Z","lastTransitionTime":"2025-11-25T14:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.058253 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.058294 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.058303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.058319 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.058330 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:20Z","lastTransitionTime":"2025-11-25T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.160849 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.160895 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.160905 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.160920 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.160929 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:20Z","lastTransitionTime":"2025-11-25T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.263721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.263770 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.263778 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.263793 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.263803 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:20Z","lastTransitionTime":"2025-11-25T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.366326 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.366357 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.366365 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.366379 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.366388 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:20Z","lastTransitionTime":"2025-11-25T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.469157 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.469204 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.469217 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.469235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.469247 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:20Z","lastTransitionTime":"2025-11-25T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.571700 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.571744 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.571754 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.571770 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.571780 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:20Z","lastTransitionTime":"2025-11-25T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.674424 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.674477 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.674490 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.674510 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.674523 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:20Z","lastTransitionTime":"2025-11-25T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.777284 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.777357 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.777369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.777385 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.777393 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:20Z","lastTransitionTime":"2025-11-25T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.812041 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.812157 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.812191 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.812215 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:20 crc kubenswrapper[4731]: E1125 14:55:20.812311 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:20 crc kubenswrapper[4731]: E1125 14:55:20.812436 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:20 crc kubenswrapper[4731]: E1125 14:55:20.812551 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:20 crc kubenswrapper[4731]: E1125 14:55:20.812669 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.879369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.879422 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.879430 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.879445 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.879454 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:20Z","lastTransitionTime":"2025-11-25T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.982294 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.982343 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.982351 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.982368 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:20 crc kubenswrapper[4731]: I1125 14:55:20.982380 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:20Z","lastTransitionTime":"2025-11-25T14:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.084312 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.084383 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.084392 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.084407 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.084416 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:21Z","lastTransitionTime":"2025-11-25T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.186999 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.187060 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.187071 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.187086 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.187097 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:21Z","lastTransitionTime":"2025-11-25T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.289786 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.289869 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.289884 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.289905 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.289939 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:21Z","lastTransitionTime":"2025-11-25T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.392629 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.392672 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.392681 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.392703 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.392724 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:21Z","lastTransitionTime":"2025-11-25T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.495300 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.495347 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.495357 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.495376 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.495387 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:21Z","lastTransitionTime":"2025-11-25T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.597904 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.597943 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.597953 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.597969 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.597982 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:21Z","lastTransitionTime":"2025-11-25T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.701140 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.701197 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.701208 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.701224 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.701237 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:21Z","lastTransitionTime":"2025-11-25T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.803442 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.803487 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.803496 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.803510 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.803519 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:21Z","lastTransitionTime":"2025-11-25T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.906026 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.906112 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.906125 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.906141 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:21 crc kubenswrapper[4731]: I1125 14:55:21.906152 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:21Z","lastTransitionTime":"2025-11-25T14:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.008202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.008250 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.008267 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.008290 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.008305 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:22Z","lastTransitionTime":"2025-11-25T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.110726 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.110759 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.110768 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.110782 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.110792 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:22Z","lastTransitionTime":"2025-11-25T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.213784 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.213854 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.213865 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.213880 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.213893 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:22Z","lastTransitionTime":"2025-11-25T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.318412 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.318465 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.318478 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.318497 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.318515 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:22Z","lastTransitionTime":"2025-11-25T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.420571 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.420626 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.420639 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.420659 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.420673 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:22Z","lastTransitionTime":"2025-11-25T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.523440 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.523482 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.523492 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.523509 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.523522 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:22Z","lastTransitionTime":"2025-11-25T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.625494 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.625572 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.625582 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.625600 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.625616 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:22Z","lastTransitionTime":"2025-11-25T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.728646 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.728697 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.728709 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.728725 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.728734 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:22Z","lastTransitionTime":"2025-11-25T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.811475 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:22 crc kubenswrapper[4731]: E1125 14:55:22.811622 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.811810 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:22 crc kubenswrapper[4731]: E1125 14:55:22.811880 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.811980 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:22 crc kubenswrapper[4731]: E1125 14:55:22.812030 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.812130 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:22 crc kubenswrapper[4731]: E1125 14:55:22.812266 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.824948 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/st
atic-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.831687 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.831742 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.831752 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.831769 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.831781 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:22Z","lastTransitionTime":"2025-11-25T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.846008 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:59Z\\\",\\\"message\\\":\\\"ervices.lbConfig(nil)\\\\nF1125 14:54:59.642336 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:54:59.642343 6431 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1125 14:54:59.642347 6431 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.859718 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.871568 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.888238 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa9
18ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.899530 4731 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.914292 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.928858 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 
2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.934197 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.934240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.934254 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.934275 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.934288 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:22Z","lastTransitionTime":"2025-11-25T14:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.943049 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 
14:55:22.955883 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.972297 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"2025-11-25T14:54:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816\\\\n2025-11-25T14:54:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816 to /host/opt/cni/bin/\\\\n2025-11-25T14:54:29Z [verbose] multus-daemon started\\\\n2025-11-25T14:54:29Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:55:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.984008 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:22 crc kubenswrapper[4731]: I1125 14:55:22.995927 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:22Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.012238 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.027575 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.037538 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.037579 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.037588 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.037602 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.037612 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:23Z","lastTransitionTime":"2025-11-25T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.042924 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.058765 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:23Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.139782 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.139832 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.139840 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.139855 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:23 crc kubenswrapper[4731]: I1125 14:55:23.139863 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:23Z","lastTransitionTime":"2025-11-25T14:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.783451 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.783499 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.783512 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.783530 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.783544 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:24Z","lastTransitionTime":"2025-11-25T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.810786 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.810927 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:24 crc kubenswrapper[4731]: E1125 14:55:24.810959 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:24 crc kubenswrapper[4731]: E1125 14:55:24.811082 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.810788 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:24 crc kubenswrapper[4731]: E1125 14:55:24.811229 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.811099 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:24 crc kubenswrapper[4731]: E1125 14:55:24.811366 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.887014 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.887078 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.887091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.887112 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.887126 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:24Z","lastTransitionTime":"2025-11-25T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.990606 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.990661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.990671 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.990690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:24 crc kubenswrapper[4731]: I1125 14:55:24.990704 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:24Z","lastTransitionTime":"2025-11-25T14:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.640528 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.640581 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.640592 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.640608 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.640617 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:26Z","lastTransitionTime":"2025-11-25T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.743454 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.743498 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.743509 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.743526 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.743538 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:26Z","lastTransitionTime":"2025-11-25T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.811289 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.811299 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:26 crc kubenswrapper[4731]: E1125 14:55:26.811445 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.811319 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:26 crc kubenswrapper[4731]: E1125 14:55:26.811580 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.811306 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:26 crc kubenswrapper[4731]: E1125 14:55:26.811862 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:26 crc kubenswrapper[4731]: E1125 14:55:26.811932 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.845659 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.845691 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.845702 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.845716 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.845726 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:26Z","lastTransitionTime":"2025-11-25T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.948341 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.948721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.948793 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.948917 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:26 crc kubenswrapper[4731]: I1125 14:55:26.948992 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:26Z","lastTransitionTime":"2025-11-25T14:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.051296 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.051690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.051790 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.051908 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.052011 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.154904 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.154977 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.155000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.155019 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.155032 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.257697 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.257746 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.257760 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.257779 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.257795 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.360151 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.360193 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.360203 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.360221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.360233 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.377966 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.378000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.378008 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.378023 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.378032 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: E1125 14:55:27.389374 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.393656 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.393684 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.393696 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.393709 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.393718 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: E1125 14:55:27.405211 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.409675 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.409727 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.409737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.409753 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.409764 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: E1125 14:55:27.424214 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.428195 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.428229 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.428243 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.428264 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.428276 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: E1125 14:55:27.444993 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.448216 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.448249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.448261 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.448278 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.448290 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: E1125 14:55:27.460186 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:27Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:27 crc kubenswrapper[4731]: E1125 14:55:27.460345 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.462077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.462166 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.462182 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.462223 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.462238 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.564571 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.564608 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.564617 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.564632 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.564642 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.668044 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.668086 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.668095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.668110 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.668120 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.771249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.771711 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.771941 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.772123 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.772269 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.878099 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.878189 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.878213 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.878244 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.878265 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.981374 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.981698 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.981781 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.981904 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:27 crc kubenswrapper[4731]: I1125 14:55:27.981999 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:27Z","lastTransitionTime":"2025-11-25T14:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.084847 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.085093 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.085159 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.085235 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.085307 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:28Z","lastTransitionTime":"2025-11-25T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.188118 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.188162 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.188179 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.188201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.188213 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:28Z","lastTransitionTime":"2025-11-25T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.291578 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.291638 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.291655 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.291678 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.291702 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:28Z","lastTransitionTime":"2025-11-25T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.394599 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.394917 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.394990 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.395071 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.395135 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:28Z","lastTransitionTime":"2025-11-25T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.497896 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.497946 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.497957 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.497975 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.497987 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:28Z","lastTransitionTime":"2025-11-25T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.599909 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.599956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.599964 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.599981 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.599990 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:28Z","lastTransitionTime":"2025-11-25T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.702558 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.702616 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.702632 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.702651 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.702663 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:28Z","lastTransitionTime":"2025-11-25T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.799630 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.799760 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.799811 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:32.79979223 +0000 UTC m=+151.546926660 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.799916 4731 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.800024 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:32.799996426 +0000 UTC m=+151.547131036 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.800028 4731 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.799937 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.800074 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:32.800067297 +0000 UTC m=+151.547201727 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.805357 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.805408 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.805421 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.805441 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.805457 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:28Z","lastTransitionTime":"2025-11-25T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.810737 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.810772 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.810740 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.810738 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.810881 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.810951 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.811000 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.811056 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.901156 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.901253 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.901400 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.901424 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.901440 4731 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.901504 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:32.901487316 +0000 UTC m=+151.648621746 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.901520 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.901574 4731 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.901595 4731 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:55:28 crc kubenswrapper[4731]: E1125 14:55:28.901685 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:32.901658491 +0000 UTC m=+151.648792961 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.907488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.907531 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.907541 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.907554 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:28 crc kubenswrapper[4731]: I1125 14:55:28.907564 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:28Z","lastTransitionTime":"2025-11-25T14:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.010682 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.010749 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.010768 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.010796 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.010843 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:29Z","lastTransitionTime":"2025-11-25T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.114154 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.114199 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.114209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.114227 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.114242 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:29Z","lastTransitionTime":"2025-11-25T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.217466 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.217536 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.217556 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.217581 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.217598 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:29Z","lastTransitionTime":"2025-11-25T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.320784 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.321299 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.321528 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.321680 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.321858 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:29Z","lastTransitionTime":"2025-11-25T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.425000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.425071 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.425085 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.425105 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.425121 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:29Z","lastTransitionTime":"2025-11-25T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.528339 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.528411 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.528438 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.528480 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.528500 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:29Z","lastTransitionTime":"2025-11-25T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.632444 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.632869 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.633028 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.633183 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.633330 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:29Z","lastTransitionTime":"2025-11-25T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.736701 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.736944 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.736986 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.737010 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.737021 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:29Z","lastTransitionTime":"2025-11-25T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.812942 4731 scope.go:117] "RemoveContainer" containerID="3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.824642 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.840075 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.840146 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.840164 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.840191 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.840210 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:29Z","lastTransitionTime":"2025-11-25T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.943129 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.943166 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.943176 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.943191 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:29 crc kubenswrapper[4731]: I1125 14:55:29.943202 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:29Z","lastTransitionTime":"2025-11-25T14:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.045902 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.046000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.046017 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.046042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.046095 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:30Z","lastTransitionTime":"2025-11-25T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.149097 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.149149 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.149160 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.149177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.149190 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:30Z","lastTransitionTime":"2025-11-25T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.252400 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.252448 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.252457 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.252477 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.252488 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:30Z","lastTransitionTime":"2025-11-25T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.354712 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.354758 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.354767 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.354810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.354835 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:30Z","lastTransitionTime":"2025-11-25T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.421009 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/2.log" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.423690 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.424497 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.438971 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"2025-11-25T14:54:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816\\\\n2025-11-25T14:54:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816 to /host/opt/cni/bin/\\\\n2025-11-25T14:54:29Z [verbose] multus-daemon started\\\\n2025-11-25T14:54:29Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:55:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.452064 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.456938 4731 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.456986 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.457000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.457020 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.457033 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:30Z","lastTransitionTime":"2025-11-25T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.465311 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.479564 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.494835 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.508676 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.521711 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.533632 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.550996 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b17
5e3189e727fcd1a98af17ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:59Z\\\",\\\"message\\\":\\\"ervices.lbConfig(nil)\\\\nF1125 14:54:59.642336 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:54:59.642343 6431 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1125 14:54:59.642347 6431 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.559310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.559339 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.559348 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.559363 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.559373 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:30Z","lastTransitionTime":"2025-11-25T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.563712 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.574328 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.588102 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.598891 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.608878 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd2e860f-53e3-44ad-98a2-c7e06d03d0dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c02f6a900147229d88be040478ffbd81f6aac846bfd5c10174be1a6fbbb613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.620014 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.632074 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 
2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.642149 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.651192 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:30Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.661888 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.661933 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.661945 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.661968 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.661979 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:30Z","lastTransitionTime":"2025-11-25T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.764046 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.764088 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.764097 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.764114 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.764123 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:30Z","lastTransitionTime":"2025-11-25T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.811522 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:30 crc kubenswrapper[4731]: E1125 14:55:30.811641 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.811486 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.811701 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:30 crc kubenswrapper[4731]: E1125 14:55:30.811735 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.811713 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:30 crc kubenswrapper[4731]: E1125 14:55:30.811902 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:30 crc kubenswrapper[4731]: E1125 14:55:30.812025 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.866696 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.866762 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.866776 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.866796 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.866808 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:30Z","lastTransitionTime":"2025-11-25T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.969889 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.969945 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.969957 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.969977 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:30 crc kubenswrapper[4731]: I1125 14:55:30.969992 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:30Z","lastTransitionTime":"2025-11-25T14:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.071959 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.071993 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.072002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.072017 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.072027 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:31Z","lastTransitionTime":"2025-11-25T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.175040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.175311 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.175323 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.175340 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.175352 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:31Z","lastTransitionTime":"2025-11-25T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.279554 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.279609 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.279623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.279644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.279662 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:31Z","lastTransitionTime":"2025-11-25T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.382844 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.382919 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.382939 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.382969 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.382987 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:31Z","lastTransitionTime":"2025-11-25T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.429567 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/3.log" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.430247 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/2.log" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.433911 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6" exitCode=1 Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.433973 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"} Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.434025 4731 scope.go:117] "RemoveContainer" containerID="3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.434907 4731 scope.go:117] "RemoveContainer" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6" Nov 25 14:55:31 crc kubenswrapper[4731]: E1125 14:55:31.435126 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.450938 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.478863 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b17
5e3189e727fcd1a98af17ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3869fc951c7bff1beacf37457b72e71c39e5af1972a9b74b8af504715a323eef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:54:59Z\\\",\\\"message\\\":\\\"ervices.lbConfig(nil)\\\\nF1125 14:54:59.642336 6431 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:54:59Z is after 2025-08-24T17:21:41Z]\\\\nI1125 14:54:59.642343 6431 loadbalancer.go:304] Deleted 0 stale LBs for map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-daemon\\\\\\\"}\\\\nI1125 14:54:59.642347 6431 services_controller.go:451] Built service openshift-marketplace/redhat-marketplace cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/redhat-marketplace_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:30Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 14:55:30.697617 6854 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1125 14:55:30.697618 6854 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1125 14:55:30.697622 6854 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:55:30.697624 6854 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:55:30.697627 6854 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1125 14:55:30.697614 6854 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin 
network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annota\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:55:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.486610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.486661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.486675 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.486695 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.486707 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:31Z","lastTransitionTime":"2025-11-25T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.517695 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.536792 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.561636 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.573517 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd2e860f-53e3-44ad-98a2-c7e06d03d0dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c02f6a900147229d88be040478ffbd81f6aac846bfd5c10174be1a6fbbb613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.587314 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.596323 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.596361 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.596372 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.596393 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.596408 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:31Z","lastTransitionTime":"2025-11-25T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.600328 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.612278 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.623005 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.634366 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.646132 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.657599 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.669439 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.685397 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.699273 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.699354 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.699373 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.699401 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.699422 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:31Z","lastTransitionTime":"2025-11-25T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.700417 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.714673 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.730124 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"2025-11-25T14:54:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816\\\\n2025-11-25T14:54:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816 to /host/opt/cni/bin/\\\\n2025-11-25T14:54:29Z [verbose] multus-daemon started\\\\n2025-11-25T14:54:29Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:55:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:31Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.802923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.802996 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.803009 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.803030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.803043 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:31Z","lastTransitionTime":"2025-11-25T14:55:31Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.905803 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.905872 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.905885 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.905906 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:31 crc kubenswrapper[4731]: I1125 14:55:31.905919 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:31Z","lastTransitionTime":"2025-11-25T14:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.009740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.009802 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.009833 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.009853 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.009865 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:32Z","lastTransitionTime":"2025-11-25T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.113350 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.113446 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.113471 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.113537 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.113563 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:32Z","lastTransitionTime":"2025-11-25T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.217542 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.217644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.217672 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.217710 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.217735 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:32Z","lastTransitionTime":"2025-11-25T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.320936 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.320994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.321007 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.321029 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.321045 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:32Z","lastTransitionTime":"2025-11-25T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.423892 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.423936 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.423946 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.423965 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.423978 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:32Z","lastTransitionTime":"2025-11-25T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.439979 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/3.log"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.443696 4731 scope.go:117] "RemoveContainer" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"
Nov 25 14:55:32 crc kubenswrapper[4731]: E1125 14:55:32.443890 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513"
Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.461717 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.484931 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.500351 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.515921 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.526670 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.526728 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.526743 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.526766 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.526780 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:32Z","lastTransitionTime":"2025-11-25T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.530567 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd2e860f-53e3-44ad-98a2-c7e06d03d0dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c02f6a900147229d88be040478ffbd81f6aac846bfd5c10174be1a6fbbb613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.544692 4731 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.557117 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.572846 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.593845 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.610869 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"2025-11-25T14:54:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816\\\\n2025-11-25T14:54:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816 to /host/opt/cni/bin/\\\\n2025-11-25T14:54:29Z [verbose] multus-daemon started\\\\n2025-11-25T14:54:29Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:55:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.624043 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.629750 4731 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.629799 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.629810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.629850 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.629865 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:32Z","lastTransitionTime":"2025-11-25T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.639667 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.664925 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.677871 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.697664 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b17
5e3189e727fcd1a98af17ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:30Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 14:55:30.697617 6854 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1125 14:55:30.697618 6854 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1125 14:55:30.697622 6854 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:55:30.697624 6854 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:55:30.697627 6854 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1125 14:55:30.697614 6854 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annota\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:55:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.715362 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.729181 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.731921 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.731972 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.731984 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.732002 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.732014 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:32Z","lastTransitionTime":"2025-11-25T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.742067 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.811447 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.811500 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:32 crc kubenswrapper[4731]: E1125 14:55:32.811593 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.811642 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:32 crc kubenswrapper[4731]: E1125 14:55:32.811738 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.811461 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:32 crc kubenswrapper[4731]: E1125 14:55:32.811845 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:32 crc kubenswrapper[4731]: E1125 14:55:32.811904 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.825369 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.834124 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.834389 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.834453 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.834519 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.834584 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:32Z","lastTransitionTime":"2025-11-25T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.838569 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.852035 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.866953 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"2025-11-25T14:54:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816\\\\n2025-11-25T14:54:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816 to /host/opt/cni/bin/\\\\n2025-11-25T14:54:29Z [verbose] multus-daemon started\\\\n2025-11-25T14:54:29Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:55:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.879647 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.890968 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.904131 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.919903 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.937686 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.937722 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.937730 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.937745 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.937754 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:32Z","lastTransitionTime":"2025-11-25T14:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.942954 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:30Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 14:55:30.697617 6854 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1125 14:55:30.697618 6854 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1125 14:55:30.697622 6854 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:55:30.697624 6854 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:55:30.697627 6854 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1125 14:55:30.697614 6854 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annota\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:55:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.958148 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt
\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.979239 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:32 crc kubenswrapper[4731]: I1125 14:55:32.991886 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:55:32Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.007570 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.018987 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.030081 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.040738 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.040806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.040837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.040861 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.040875 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:33Z","lastTransitionTime":"2025-11-25T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.040994 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.051096 4731 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd2e860f-53e3-44ad-98a2-c7e06d03d0dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c02f6a900147229d88be040478ffbd81f6aac846bfd5c10174be1a6fbbb613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.062524 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:33Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.144508 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.144561 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.144575 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.144597 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.144610 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:33Z","lastTransitionTime":"2025-11-25T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.246546 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.247045 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.247064 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.247091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.247114 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:33Z","lastTransitionTime":"2025-11-25T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.350431 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.350502 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.350520 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.350547 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.350570 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:33Z","lastTransitionTime":"2025-11-25T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.452772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.452810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.452840 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.452858 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.452872 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:33Z","lastTransitionTime":"2025-11-25T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.555653 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.555715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.555728 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.555748 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.555763 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:33Z","lastTransitionTime":"2025-11-25T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.658465 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.658531 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.658543 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.658559 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.658570 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:33Z","lastTransitionTime":"2025-11-25T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.761249 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.761297 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.761307 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.761328 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.761341 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:33Z","lastTransitionTime":"2025-11-25T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.864418 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.864473 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.864484 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.864526 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.864536 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:33Z","lastTransitionTime":"2025-11-25T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.967597 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.967639 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.967648 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.967666 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:33 crc kubenswrapper[4731]: I1125 14:55:33.967677 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:33Z","lastTransitionTime":"2025-11-25T14:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.070415 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.070508 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.070530 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.070562 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.070587 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:34Z","lastTransitionTime":"2025-11-25T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.172730 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.172777 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.172789 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.172807 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.172837 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:34Z","lastTransitionTime":"2025-11-25T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.275894 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.275957 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.275974 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.275997 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.276014 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:34Z","lastTransitionTime":"2025-11-25T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.379128 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.379224 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.379247 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.379281 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.379303 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:34Z","lastTransitionTime":"2025-11-25T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.481744 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.481806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.481850 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.481874 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.481888 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:34Z","lastTransitionTime":"2025-11-25T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.585047 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.585095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.585106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.585122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.585134 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:34Z","lastTransitionTime":"2025-11-25T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.688151 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.688199 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.688210 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.688230 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.688246 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:34Z","lastTransitionTime":"2025-11-25T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.791658 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.791810 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.791846 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.791906 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.791924 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:34Z","lastTransitionTime":"2025-11-25T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.811096 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.811230 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.811243 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.811295 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:34 crc kubenswrapper[4731]: E1125 14:55:34.811368 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:34 crc kubenswrapper[4731]: E1125 14:55:34.811484 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:34 crc kubenswrapper[4731]: E1125 14:55:34.811612 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:34 crc kubenswrapper[4731]: E1125 14:55:34.811677 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.894504 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.894561 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.894574 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.894794 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.894836 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:34Z","lastTransitionTime":"2025-11-25T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.997656 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.997744 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.997762 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.997785 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:34 crc kubenswrapper[4731]: I1125 14:55:34.997800 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:34Z","lastTransitionTime":"2025-11-25T14:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.101691 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.101779 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.101791 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.101809 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.101834 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:35Z","lastTransitionTime":"2025-11-25T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.204891 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.204987 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.205011 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.205047 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.205069 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:35Z","lastTransitionTime":"2025-11-25T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.309138 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.309260 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.309280 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.309306 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.309325 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:35Z","lastTransitionTime":"2025-11-25T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.412968 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.413053 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.413073 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.413100 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.413119 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:35Z","lastTransitionTime":"2025-11-25T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.516034 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.516106 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.516122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.516152 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.516165 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:35Z","lastTransitionTime":"2025-11-25T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.619792 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.619935 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.619960 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.619991 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.620012 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:35Z","lastTransitionTime":"2025-11-25T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.723965 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.724045 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.724058 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.724083 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.724097 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:35Z","lastTransitionTime":"2025-11-25T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.827172 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.827255 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.827273 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.827306 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.827338 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:35Z","lastTransitionTime":"2025-11-25T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.931312 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.931369 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.931378 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.931397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:35 crc kubenswrapper[4731]: I1125 14:55:35.931408 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:35Z","lastTransitionTime":"2025-11-25T14:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.034450 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.034511 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.034521 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.034539 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.034551 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:36Z","lastTransitionTime":"2025-11-25T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.138465 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.138535 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.138553 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.138587 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.138634 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:36Z","lastTransitionTime":"2025-11-25T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.241111 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.241161 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.241171 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.241186 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.241200 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:36Z","lastTransitionTime":"2025-11-25T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.344984 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.345073 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.345099 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.345136 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.345164 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:36Z","lastTransitionTime":"2025-11-25T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.447581 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.447684 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.447703 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.447737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.447763 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:36Z","lastTransitionTime":"2025-11-25T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.550391 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.550445 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.550455 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.550474 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.550487 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:36Z","lastTransitionTime":"2025-11-25T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.652872 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.652946 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.652972 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.653006 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.653032 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:36Z","lastTransitionTime":"2025-11-25T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.756506 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.757795 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.757861 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.757892 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.757914 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:36Z","lastTransitionTime":"2025-11-25T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.811097 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.811170 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:36 crc kubenswrapper[4731]: E1125 14:55:36.811268 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.811170 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.811200 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:36 crc kubenswrapper[4731]: E1125 14:55:36.811368 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:36 crc kubenswrapper[4731]: E1125 14:55:36.811585 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:36 crc kubenswrapper[4731]: E1125 14:55:36.811626 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.861036 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.861096 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.861108 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.861127 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.861139 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:36Z","lastTransitionTime":"2025-11-25T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.964229 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.964281 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.964291 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.964308 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:36 crc kubenswrapper[4731]: I1125 14:55:36.964320 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:36Z","lastTransitionTime":"2025-11-25T14:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.066862 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.066908 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.066921 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.066940 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.066954 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.170749 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.170804 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.170831 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.170852 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.170864 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.273186 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.273270 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.273284 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.273304 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.273316 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.376433 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.376483 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.376495 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.376510 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.376521 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.479232 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.479288 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.479303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.479326 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.479342 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.582549 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.582586 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.582596 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.582610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.582620 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.626432 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.626513 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.626528 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.626553 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.626571 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: E1125 14:55:37.641478 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.645941 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.646005 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.646020 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.646038 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.646050 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: E1125 14:55:37.659976 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.663978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.664095 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.664107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.664125 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.664137 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: E1125 14:55:37.676156 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.679654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.679691 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.679703 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.679738 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.679749 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: E1125 14:55:37.692643 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.697865 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.697913 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.697929 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.697952 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.697967 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: E1125 14:55:37.711972 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:37Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:37 crc kubenswrapper[4731]: E1125 14:55:37.712115 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.714202 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.714248 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.714259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.714282 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.714295 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.816391 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.816427 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.816437 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.816449 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.816460 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.918302 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.918337 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.918347 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.918362 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:37 crc kubenswrapper[4731]: I1125 14:55:37.918372 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:37Z","lastTransitionTime":"2025-11-25T14:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.020661 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.020714 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.020723 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.020740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.020750 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:38Z","lastTransitionTime":"2025-11-25T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.128603 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.128685 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.128705 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.128737 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.128756 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:38Z","lastTransitionTime":"2025-11-25T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.232069 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.232161 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.232173 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.232193 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.232208 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:38Z","lastTransitionTime":"2025-11-25T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.334770 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.334803 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.334812 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.334842 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.334852 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:38Z","lastTransitionTime":"2025-11-25T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.437507 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.437555 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.437564 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.437577 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.437665 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:38Z","lastTransitionTime":"2025-11-25T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.540582 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.540625 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.540638 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.540657 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.540669 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:38Z","lastTransitionTime":"2025-11-25T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.643850 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.643891 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.643903 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.643920 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.643934 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:38Z","lastTransitionTime":"2025-11-25T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.746310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.746547 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.746683 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.746782 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.746915 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:38Z","lastTransitionTime":"2025-11-25T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.810951 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.811067 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.811120 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:38 crc kubenswrapper[4731]: E1125 14:55:38.811153 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.811162 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:38 crc kubenswrapper[4731]: E1125 14:55:38.811349 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:38 crc kubenswrapper[4731]: E1125 14:55:38.811545 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:38 crc kubenswrapper[4731]: E1125 14:55:38.811755 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.849545 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.849610 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.849622 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.849641 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.849653 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:38Z","lastTransitionTime":"2025-11-25T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.952511 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.952846 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.952917 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.952986 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:38 crc kubenswrapper[4731]: I1125 14:55:38.953048 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:38Z","lastTransitionTime":"2025-11-25T14:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.055736 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.055773 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.055782 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.055797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.055807 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:39Z","lastTransitionTime":"2025-11-25T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.158704 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.158752 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.158764 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.158782 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.158793 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:39Z","lastTransitionTime":"2025-11-25T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.261858 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.261932 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.261956 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.261988 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.262013 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:39Z","lastTransitionTime":"2025-11-25T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.364399 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.364437 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.364445 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.364459 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.364467 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:39Z","lastTransitionTime":"2025-11-25T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.466444 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.466489 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.466502 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.466523 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.466537 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:39Z","lastTransitionTime":"2025-11-25T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.569015 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.569303 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.569412 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.569503 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.569589 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:39Z","lastTransitionTime":"2025-11-25T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.701918 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.701947 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.701955 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.701970 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.701981 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:39Z","lastTransitionTime":"2025-11-25T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.804973 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.805036 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.805050 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.805065 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.805078 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:39Z","lastTransitionTime":"2025-11-25T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.908453 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.908490 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.908500 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.908515 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:39 crc kubenswrapper[4731]: I1125 14:55:39.908525 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:39Z","lastTransitionTime":"2025-11-25T14:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.011343 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.011620 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.011740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.011857 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.011955 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:40Z","lastTransitionTime":"2025-11-25T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.115469 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.115979 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.116016 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.116040 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.116056 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:40Z","lastTransitionTime":"2025-11-25T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.219227 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.219278 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.219290 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.219307 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.219322 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:40Z","lastTransitionTime":"2025-11-25T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.322403 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.322623 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.322719 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.322792 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.322873 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:40Z","lastTransitionTime":"2025-11-25T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.425379 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.425416 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.425427 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.425445 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.425455 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:40Z","lastTransitionTime":"2025-11-25T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.528510 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.528556 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.528565 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.528590 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.528601 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:40Z","lastTransitionTime":"2025-11-25T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.631348 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.631421 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.631446 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.631477 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.631500 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:40Z","lastTransitionTime":"2025-11-25T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.733180 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.733211 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.733219 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.733234 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.733246 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:40Z","lastTransitionTime":"2025-11-25T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.811080 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.811145 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.811167 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.811254 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:40 crc kubenswrapper[4731]: E1125 14:55:40.811409 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:40 crc kubenswrapper[4731]: E1125 14:55:40.811492 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:40 crc kubenswrapper[4731]: E1125 14:55:40.811559 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:40 crc kubenswrapper[4731]: E1125 14:55:40.811802 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.835676 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.835771 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.835806 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.835852 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.835865 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:40Z","lastTransitionTime":"2025-11-25T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.939279 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.939337 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.939355 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.939378 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:40 crc kubenswrapper[4731]: I1125 14:55:40.939397 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:40Z","lastTransitionTime":"2025-11-25T14:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.042358 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.042410 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.042425 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.042447 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.042460 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:41Z","lastTransitionTime":"2025-11-25T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.144883 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.144962 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.144983 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.145013 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.145032 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:41Z","lastTransitionTime":"2025-11-25T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.249999 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.250272 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.250363 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.250512 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.250616 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:41Z","lastTransitionTime":"2025-11-25T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.354665 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.354725 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.354740 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.354761 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.354772 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:41Z","lastTransitionTime":"2025-11-25T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.457649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.457707 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.457721 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.457741 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.458059 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:41Z","lastTransitionTime":"2025-11-25T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.561091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.561194 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.561226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.561265 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.561292 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:41Z","lastTransitionTime":"2025-11-25T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.664678 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.665073 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.665209 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.665299 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.665391 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:41Z","lastTransitionTime":"2025-11-25T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.768709 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.768779 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.768797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.768849 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.768904 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:41Z","lastTransitionTime":"2025-11-25T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.830453 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.871184 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.871543 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.871681 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.871800 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.871922 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:41Z","lastTransitionTime":"2025-11-25T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.974953 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.975428 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.975558 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.975652 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:41 crc kubenswrapper[4731]: I1125 14:55:41.975779 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:41Z","lastTransitionTime":"2025-11-25T14:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.079108 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.079168 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.079184 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.079205 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.079221 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:42Z","lastTransitionTime":"2025-11-25T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.181756 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.181808 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.181841 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.181863 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.181877 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:42Z","lastTransitionTime":"2025-11-25T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.285167 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.285229 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.285240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.285263 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.285275 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:42Z","lastTransitionTime":"2025-11-25T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.387989 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.388030 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.388039 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.388054 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.388064 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:42Z","lastTransitionTime":"2025-11-25T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.491147 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.491241 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.491265 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.491295 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.491315 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:42Z","lastTransitionTime":"2025-11-25T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.594725 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.594812 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.594867 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.594896 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.594922 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:42Z","lastTransitionTime":"2025-11-25T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.698166 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.698221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.698238 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.698266 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.698286 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:42Z","lastTransitionTime":"2025-11-25T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.801865 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.801941 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.801963 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.801997 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.802019 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:42Z","lastTransitionTime":"2025-11-25T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.811336 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.811402 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:42 crc kubenswrapper[4731]: E1125 14:55:42.811536 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.811348 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.811603 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:42 crc kubenswrapper[4731]: E1125 14:55:42.811784 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:42 crc kubenswrapper[4731]: E1125 14:55:42.811969 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:42 crc kubenswrapper[4731]: E1125 14:55:42.812096 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.835648 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.861837 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"2025-11-25T14:54:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816\\\\n2025-11-25T14:54:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816 to /host/opt/cni/bin/\\\\n2025-11-25T14:54:29Z [verbose] multus-daemon started\\\\n2025-11-25T14:54:29Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:55:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.881862 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.903091 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.907872 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.907934 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.907958 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.907992 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.908014 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:42Z","lastTransitionTime":"2025-11-25T14:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.945329 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3c95d4f-bbba-42f1-a696-45b3c2b775d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5368fa0371c00168c1b9b863d630b76f0db01056843f0ea4f29bac38e750577a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7a8745b89386e55eb513c44cee3d0c23e71fcc91be84b9fbe06d9c09bdc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3b4b17390093143d8349cb9c4223e90368eaafaebd0093147b02ee4811a825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a12dbc3c71d2db1d01d1fc3795bb579c9b6684d80795645370b67e2716a4c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595ca311cab164e4f036721f9d91c16837042f77ce78f858020c2cbc5bc5a9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710b7b753496ce77eee2b2f7bfd01eb64e39c698b8674398ce8dd2c357f2fcc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://710b7b753496ce77eee2b2f7bfd01eb64e39c698b8674398ce8dd2c357f2fcc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80d04996b1f8ec5b41a261cbd2a477bde7de5a76c6388944c786299f2eaae93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80d04996b1f8ec5b41a261cbd2a477bde7de5a76c6388944c786299f2eaae93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0f0947845befe4d07072cbe20e8c78fd51e25ad307aff74552e1f1eb53d842fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0947845befe4d07072cbe20e8c78fd51e25ad307aff74552e1f1eb53d842fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.968754 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:42 crc kubenswrapper[4731]: I1125 14:55:42.990147 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:42Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.007128 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.010400 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.010455 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.010467 4731 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.010487 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.010501 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:43Z","lastTransitionTime":"2025-11-25T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.022580 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.044380 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b17
5e3189e727fcd1a98af17ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:30Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 14:55:30.697617 6854 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1125 14:55:30.697618 6854 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1125 14:55:30.697622 6854 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:55:30.697624 6854 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:55:30.697627 6854 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1125 14:55:30.697614 6854 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annota\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:55:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.059205 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.069530 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.090077 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa9
18ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-
copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.103030 4731 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.113007 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.113074 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.113083 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.113100 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.113134 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:43Z","lastTransitionTime":"2025-11-25T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.115072 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.126192 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd2e860f-53e3-44ad-98a2-c7e06d03d0dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c02f6a900147229d88be040478ffbd81f6aac846bfd5c10174be1a6fbbb613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 
crc kubenswrapper[4731]: I1125 14:55:43.138785 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.151661 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.164187 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:43Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.216898 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.216947 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.216957 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.216972 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.216984 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:43Z","lastTransitionTime":"2025-11-25T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.318837 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.318874 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.318884 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.318899 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.318911 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:43Z","lastTransitionTime":"2025-11-25T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.421736 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.421764 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.421773 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.421787 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.421798 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:43Z","lastTransitionTime":"2025-11-25T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.524654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.524713 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.524730 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.524755 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.524779 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:43Z","lastTransitionTime":"2025-11-25T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.627627 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.627675 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.627690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.627711 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.627723 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:43Z","lastTransitionTime":"2025-11-25T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.730354 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.730425 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.730449 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.730478 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.730499 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:43Z","lastTransitionTime":"2025-11-25T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.832581 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.832638 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.832657 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.832683 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.832703 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:43Z","lastTransitionTime":"2025-11-25T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.935903 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.936443 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.936598 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.936761 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:43 crc kubenswrapper[4731]: I1125 14:55:43.936949 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:43Z","lastTransitionTime":"2025-11-25T14:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.040915 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.040982 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.041000 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.041028 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.041046 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:44Z","lastTransitionTime":"2025-11-25T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.143971 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.144043 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.144059 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.144077 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.144090 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:44Z","lastTransitionTime":"2025-11-25T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.247135 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.247204 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.247220 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.247242 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.247259 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:44Z","lastTransitionTime":"2025-11-25T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.350101 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.350143 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.350158 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.350177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.350190 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:44Z","lastTransitionTime":"2025-11-25T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.453010 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.453063 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.453073 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.453142 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.453157 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:44Z","lastTransitionTime":"2025-11-25T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.562344 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.562668 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.562797 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.562967 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.563085 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:44Z","lastTransitionTime":"2025-11-25T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.666057 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.666450 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.666589 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.666731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.666928 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:44Z","lastTransitionTime":"2025-11-25T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.769999 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.770062 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.770089 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.770122 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.770146 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:44Z","lastTransitionTime":"2025-11-25T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.810743 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.810743 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.810970 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:44 crc kubenswrapper[4731]: E1125 14:55:44.811908 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:44 crc kubenswrapper[4731]: E1125 14:55:44.812061 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:44 crc kubenswrapper[4731]: E1125 14:55:44.812220 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.812637 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:44 crc kubenswrapper[4731]: E1125 14:55:44.812941 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.873606 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.874193 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.874488 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.874690 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.874902 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:44Z","lastTransitionTime":"2025-11-25T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.978585 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.979019 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.979142 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.979283 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:44 crc kubenswrapper[4731]: I1125 14:55:44.979404 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:44Z","lastTransitionTime":"2025-11-25T14:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.082595 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.082649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.082666 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.082689 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.082704 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:45Z","lastTransitionTime":"2025-11-25T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.084935 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:45 crc kubenswrapper[4731]: E1125 14:55:45.085306 4731 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:55:45 crc kubenswrapper[4731]: E1125 14:55:45.085481 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs podName:12559634-427c-4ce7-b9e8-d2dd37299642 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:49.08545919 +0000 UTC m=+167.832593630 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs") pod "network-metrics-daemon-7xvpg" (UID: "12559634-427c-4ce7-b9e8-d2dd37299642") : object "openshift-multus"/"metrics-daemon-secret" not registered Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.185497 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.185588 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.185613 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.185644 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.185670 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:45Z","lastTransitionTime":"2025-11-25T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.289379 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.289445 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.289457 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.289476 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:45 crc kubenswrapper[4731]: I1125 14:55:45.289489 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:45Z","lastTransitionTime":"2025-11-25T14:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Nov 25 14:55:46 crc kubenswrapper[4731]: I1125 14:55:46.811119 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg"
Nov 25 14:55:46 crc kubenswrapper[4731]: I1125 14:55:46.811223 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:55:46 crc kubenswrapper[4731]: I1125 14:55:46.811239 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:55:46 crc kubenswrapper[4731]: E1125 14:55:46.811292 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642"
Nov 25 14:55:46 crc kubenswrapper[4731]: E1125 14:55:46.811419 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:55:46 crc kubenswrapper[4731]: I1125 14:55:46.811497 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:55:46 crc kubenswrapper[4731]: E1125 14:55:46.811563 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:55:46 crc kubenswrapper[4731]: E1125 14:55:46.811617 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[The five-entry status cycle repeats, unchanged except for timestamps, at 14:55:46.841, .944 and 14:55:47.046, .154, .259, .361, .464, .567, .670, .773.]
Nov 25 14:55:47 crc kubenswrapper[4731]: I1125 14:55:47.812195 4731 scope.go:117] "RemoveContainer" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"
Nov 25 14:55:47 crc kubenswrapper[4731]: E1125 14:55:47.812550 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513"
[The five-entry status cycle repeats at 14:55:47.876, .979 and 14:55:48.014.]
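[Note: this entry explains the missing CNI config: the ovnkube-controller container in pod ovnkube-node-hc8cx keeps crashing, so OVN-Kubernetes never writes its CNI configuration. The container's previous logs are the first place to look for the exit reason; a minimal check:
    oc logs -n openshift-ovn-kubernetes ovnkube-node-hc8cx -c ovnkube-controller --previous
(or, directly on the node, crictl ps -a to find the container and crictl logs <container-id>).]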
Nov 25 14:55:48 crc kubenswrapper[4731]: E1125 14:55:48.029561 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[... 50 cached-image records (names and sizes) elided; the inventory is identical in each retry below ...],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:48Z is after 2025-08-24T17:21:41Z"
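[Note: the patch fails because the network-node-identity validating webhook at https://127.0.0.1:9743/node presents a serving certificate that expired on 2025-08-24, three months before the current time in this log, so the kubelet cannot update node status at all. The certificate dates can be confirmed from the node, assuming openssl is available:
    echo | openssl s_client -connect 127.0.0.1:9743 2>/dev/null | openssl x509 -noout -dates
]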
event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.034693 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.034711 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.034724 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: E1125 14:55:48.050570 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.055269 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.055314 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.055329 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.055348 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.055359 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: E1125 14:55:48.074492 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.078850 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.078888 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.078896 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.078913 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.078939 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: E1125 14:55:48.093423 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.098554 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.098622 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
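Every one of these patch failures has the same root cause, visible in the error tail of each record: the "node.network-node-identity.openshift.io" webhook at https://127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2025-11-25T14:55:48Z. A minimal Go sketch to confirm this from the node itself (a hypothetical diagnostic, not part of the cluster tooling; the address is taken from the log):

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Dial the webhook endpoint named in the log. InsecureSkipVerify is
	// deliberate: we only want to read the certificate, not trust it.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial failed: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	fmt.Printf("expired:   %t\n", time.Now().After(cert.NotAfter))
}

Run against this node it should report notAfter 2025-08-24T17:21:41Z and expired=true, matching the x509 error repeated in the records above and below.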
event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.098634 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.098652 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.098664 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: E1125 14:55:48.112008 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404564Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865364Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7c7fdaeb-c6a4-441c-872f-17222039636c\\\",\\\"systemUUID\\\":\\\"1f00ad82-feaa-4329-a6d9-a9af68b2a156\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:48Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:48 crc kubenswrapper[4731]: E1125 14:55:48.112511 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.114072 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
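At 14:55:48.112511 the kubelet gives up for this sync cycle: each status sync retries the PATCH a small fixed number of times (nodeStatusUpdateRetry, 5 in recent upstream sources; treat the exact value as an assumption) before emitting exactly this "exceeds retry count" error. An illustrative sketch of that pattern, not the actual kubelet source:

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the kubelet's per-cycle retry budget
// (5 in recent upstream sources; an assumption, not verified here).
const nodeStatusUpdateRetry = 5

func updateNodeStatus(tryPatch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		// Each failed attempt corresponds to one
		// "Error updating node status, will retry" record above.
		if err := tryPatch(); err == nil {
			return nil
		}
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Simulate the log: every PATCH is rejected by the webhook whose
	// serving certificate has expired, so the budget is exhausted.
	err := updateNodeStatus(func() error {
		return errors.New("failed calling webhook: certificate has expired")
	})
	fmt.Println(err)
}

The next cycle starts the same budget over, which is why the identical error sequence repeats throughout this log.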
event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.114177 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.114247 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.114322 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.114401 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.217406 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.217461 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.217478 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.217502 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.217521 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.321009 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.321070 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.321088 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.321117 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.321135 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.424859 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.424911 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.424923 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.424944 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.424957 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.527212 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.527278 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.527291 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.527310 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.527323 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.629225 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.629255 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.629263 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.629276 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.629285 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.732192 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.732238 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.732251 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.732268 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.732282 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.810712 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.810801 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.810873 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.810988 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:48 crc kubenswrapper[4731]: E1125 14:55:48.811052 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:48 crc kubenswrapper[4731]: E1125 14:55:48.811160 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:48 crc kubenswrapper[4731]: E1125 14:55:48.811302 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:48 crc kubenswrapper[4731]: E1125 14:55:48.811405 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.834444 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.834799 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.834941 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.835034 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.835108 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.938341 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.938383 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.938393 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.938410 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:48 crc kubenswrapper[4731]: I1125 14:55:48.938421 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:48Z","lastTransitionTime":"2025-11-25T14:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Nov 25 14:55:49 crc kubenswrapper[4731]: I1125 14:55:49.042156 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:49 crc kubenswrapper[4731]: I1125 14:55:49.042214 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:49 crc kubenswrapper[4731]: I1125 14:55:49.042231 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:49 crc kubenswrapper[4731]: I1125 14:55:49.042255 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:49 crc kubenswrapper[4731]: I1125 14:55:49.042272 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:49Z","lastTransitionTime":"2025-11-25T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.798206 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.798587 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.798717 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.798872 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.799042 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:50Z","lastTransitionTime":"2025-11-25T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
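The setters.go:603 records above embed the node's Ready condition as inline JSON. As a reading aid, a minimal Go sketch that unmarshals one of those payloads; the struct mirrors only the JSON keys visible in the log and is illustrative, not kubelet's own corev1.NodeCondition type:

package main

import (
	"encoding/json"
	"fmt"
)

// Minimal mirror of the condition object logged by setters.go:603 above.
// Field set follows the JSON keys visible in the log, nothing more.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	// Payload copied verbatim from one of the "Node became not ready" records.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:49Z","lastTransitionTime":"2025-11-25T14:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\n%s\n", c.Type, c.Status, c.Reason, c.Message)
}

Fed any of the condition={...} payloads above, this prints Ready=False with reason KubeletNotReady and the CNI message.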
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.810503 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.810676 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.810666 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.810556 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg"
Nov 25 14:55:50 crc kubenswrapper[4731]: E1125 14:55:50.810898 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:55:50 crc kubenswrapper[4731]: E1125 14:55:50.810958 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:55:50 crc kubenswrapper[4731]: E1125 14:55:50.811017 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Nov 25 14:55:50 crc kubenswrapper[4731]: E1125 14:55:50.811068 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.902076 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.902366 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.902501 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.902654 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:50 crc kubenswrapper[4731]: I1125 14:55:50.902755 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:50Z","lastTransitionTime":"2025-11-25T14:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:51 crc kubenswrapper[4731]: I1125 14:55:51.006135 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:51 crc kubenswrapper[4731]: I1125 14:55:51.006498 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:51 crc kubenswrapper[4731]: I1125 14:55:51.006649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:51 crc kubenswrapper[4731]: I1125 14:55:51.006791 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:51 crc kubenswrapper[4731]: I1125 14:55:51.007030 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:51Z","lastTransitionTime":"2025-11-25T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:51 crc kubenswrapper[4731]: I1125 14:55:51.109668 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:51 crc kubenswrapper[4731]: I1125 14:55:51.110114 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:51 crc kubenswrapper[4731]: I1125 14:55:51.110418 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:51 crc kubenswrapper[4731]: I1125 14:55:51.110630 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:51 crc kubenswrapper[4731]: I1125 14:55:51.110851 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:51Z","lastTransitionTime":"2025-11-25T14:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.769808 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.769906 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.769933 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.769968 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.769989 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:52Z","lastTransitionTime":"2025-11-25T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.811161 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.811261 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg"
Nov 25 14:55:52 crc kubenswrapper[4731]: E1125 14:55:52.811366 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Nov 25 14:55:52 crc kubenswrapper[4731]: E1125 14:55:52.811480 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642"
Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.811533 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.811570 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Nov 25 14:55:52 crc kubenswrapper[4731]: E1125 14:55:52.811952 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Nov 25 14:55:52 crc kubenswrapper[4731]: E1125 14:55:52.812103 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.831597 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8d24125-6de7-4781-99cf-179bdb0b2924\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d15ac6c20890546462a0b064f5768c553817a1a00c387aa426381b5b23f7e048\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b846fdb2153d4bf8a88b6e2a9cdb2f531905b465447ca0f84de8256e87119034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3cfda1237eb6fc68c6f44d30364cc22bef01e4b3c3cc12a954d107954b609c52\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a5c3d23ead51c8f9d3431224896f351c0ece7794a63823c68d484c5a0534568b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.851774 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://55953523281e6951237259f983b5e737aaa58ca36cb886f1a305706885437b14\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.865067 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ff5ac592aa3e75886da228d1502abc5fc77f1c304d9298f1fcbd58eb429d3c4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.872256 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.872300 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.872309 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.872326 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.872337 4731 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:52Z","lastTransitionTime":"2025-11-25T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.880394 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7wt6q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"700e6303-2fdb-4127-9fd3-b9dbaa412587\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4ebb98046696d882dce131f3a9f9f8f0377a2b5f6816e6b799bf3d5989905046\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nj6wf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:29Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7wt6q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.894041 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12559634-427c-4ce7-b9e8-d2dd37299642\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wrtbw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:41Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-7xvpg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.907498 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd2e860f-53e3-44ad-98a2-c7e06d03d0dc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1c02f6a900147229d88be040478ffbd81f6aac846bfd5c10174be1a6fbbb613\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c00208608937498741782f5d8da22301bc73fa162a56b87e97492fb210cdb19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.928052 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e8368a5d-5e6f-4425-8187-46cd17cca8fd\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-11-25T14:54:25Z\\\",\\\"message\\\":\\\"file observer\\\\nW1125 14:54:24.715198 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1125 14:54:24.715453 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1125 14:54:24.716458 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-951452222/tls.crt::/tmp/serving-cert-951452222/tls.key\\\\\\\"\\\\nI1125 14:54:25.058556 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1125 14:54:25.062507 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1125 14:54:25.063154 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1125 14:54:25.063180 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1125 14:54:25.063187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1125 14:54:25.082201 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1125 14:54:25.082932 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1125 14:54:25.082944 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1125 14:54:25.082950 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1125 14:54:25.082953 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1125 14:54:25.082956 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI1125 14:54:25.082616 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF1125 14:54:25.085316 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.948596 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.972259 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce07864ede5479f515f300b483ad1e992423008fcc67a0ef02fbf275e571231a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4f6441a647916cc1a0c66416de40d71d637400c9727be976bd3c64b915d1821\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.974125 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.974181 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.974198 4731 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.974221 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.974236 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:52Z","lastTransitionTime":"2025-11-25T14:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:52 crc kubenswrapper[4731]: I1125 14:55:52.992793 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:52Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.009031 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xx4w" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"062d0f18-096f-4f99-9faa-74dbd454bcd0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:55:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:15Z\\\",\\\"message\\\":\\\"2025-11-25T14:54:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816\\\\n2025-11-25T14:54:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_11b0b059-7d4f-4347-96eb-ae0b8b39f816 to /host/opt/cni/bin/\\\\n2025-11-25T14:54:29Z [verbose] multus-daemon started\\\\n2025-11-25T14:54:29Z [verbose] Readiness Indicator file check\\\\n2025-11-25T14:55:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zl9sh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xx4w\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.024066 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f469eb10-0aba-4dba-a237-899e0b8dabca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d0bf7d805b9498a5ca046c38c62431c550c4489f08e59ff720240f71a3c59a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ztxkf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-f6xlz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.039055 4731 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f7790405-0cce-4841-8f4f-328933f7be5e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8c0efad47eb279d5b3057da95545357616d09ec78020051f9b91a1a98673e014\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://df8881d7e3b8218f4c9a0bffbfd28c361255ce7328e3fd1374385f1d1f10d5b9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gc2sw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:39Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-672jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.063573 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f3c95d4f-bbba-42f1-a696-45b3c2b775d2\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5368fa0371c00168c1b9b863d630b76f0db01056843f0ea4f29bac38e750577a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d7a8745b89386e55eb513c44cee3d0c23e71fcc91be84b9fbe06d9c09bdc48\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa3b4b17390093143d8349cb9c4223e90368eaafaebd0093147b02ee4811a825\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2025-11-25T14:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a12dbc3c71d2db1d01d1fc3795bb579c9b6684d80795645370b67e2716a4c44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://595ca311cab164e4f036721f9d91c16837042f77ce78f858020c2cbc5bc5a9fa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710b7b753496ce77eee2b2f7bfd01eb64e39c698b8674398ce8dd2c357f2fcc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://710b7b753496ce77eee2b2f7bfd01eb64e39c698b8674398ce8dd2c357f2fcc6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c80d04996b1f8ec5b41a261cbd2a477bde7de5a76c6388944c786299f2eaae93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c80d04996b1f8ec5b41a2
61cbd2a477bde7de5a76c6388944c786299f2eaae93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:06Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0f0947845befe4d07072cbe20e8c78fd51e25ad307aff74552e1f1eb53d842fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0f0947845befe4d07072cbe20e8c78fd51e25ad307aff74552e1f1eb53d842fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.077118 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.077174 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.077190 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.077210 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.077223 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:53Z","lastTransitionTime":"2025-11-25T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.085382 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fa0e665-3cc9-4bb9-9758-75bb8db39513\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-11-25T14:55:30Z\\\",\\\"message\\\":\\\"134] Ensuring zone local for Pod openshift-network-node-identity/network-node-identity-vrzqb in node crc\\\\nI1125 14:55:30.697617 6854 ovn.go:134] Ensuring zone local for Pod openshift-kube-apiserver/kube-apiserver-crc in node crc\\\\nI1125 14:55:30.697618 6854 obj_retry.go:386] Retry successful for *v1.Pod openshift-network-node-identity/network-node-identity-vrzqb after 0 failed attempt(s)\\\\nI1125 14:55:30.697622 6854 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI1125 14:55:30.697624 6854 default_network_controller.go:776] Recording success event on pod openshift-network-node-identity/network-node-identity-vrzqb\\\\nI1125 14:55:30.697627 6854 default_network_controller.go:776] Recording success event on pod openshift-kube-apiserver/kube-apiserver-crc\\\\nF1125 14:55:30.697614 6854 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annota\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-11-25T14:55:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xdcdt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-hc8cx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.099934 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"65fddf5b-25cd-485f-bd6f-7ad573d07b4a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://34b5b9ecb1b7f2ceac51b9f91d7bc2170c4f5f02ba1f8138034ced962a34e148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9bfbf2590742e3d60e1274d15601b5edc0fcda4751499b65edb0e0388e2d8493\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c024be4785b4f898d7d47baa55cee8987bcf0e89c2f636e6713075e84d2ad2a5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:03Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.111908 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-tlrbx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"032c7641-9dfb-4b82-8bfa-b82182de9c9c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ab798ee15052bff236809182fe3979dffa6c04c6aff9a71f8b29c1d6ec1e406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rf54q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:26Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-tlrbx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.129867 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5c10b238-57f1-4168-9c7c-1cfb0ca40525\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e4681dde0cde77711f8bff00bb4e24c286c43350a07d29f86cefbf083e05fad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-11-25T14:54:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://502efda6d87b0104e393148f7f21d565ca9cb5adf7e555b09c7ce7486a14c7b8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8026378bf4bf26cefad6fff5f79c58ff1f810bfdb80728aa918ef4ec247a7cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a8754566142b638b2c9ce2ee9d89a857cf5d8bab59ba7943210453cf8a555411\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4009ae120cfd1072b8dedd4ad681a0fb5a8282597fca553e7b5a65f72d8cb3a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://027a9410d1b4ae0dad0604c6391bd941677826e9725e247a416df96d2f96c8d4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3d9d01efd07d5b94d5e923de5477d4d6c019781fb92c95ebd47bbc13f5299c7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-11-25T14:54:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-11-25T14:54:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2fpl2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-11-25T14:54:27Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-c2xb5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.147319 4731 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-11-25T14:54:24Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-11-25T14:55:53Z is after 2025-08-24T17:21:41Z" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.180337 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.180594 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.180961 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.181187 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.181411 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:53Z","lastTransitionTime":"2025-11-25T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.284399 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.284457 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.284466 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.284485 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.284494 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:53Z","lastTransitionTime":"2025-11-25T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.390673 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.390731 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.390749 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.390772 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.390788 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:53Z","lastTransitionTime":"2025-11-25T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.493240 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.493591 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.493723 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.493839 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.493926 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:53Z","lastTransitionTime":"2025-11-25T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.596290 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.597388 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.597625 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.597873 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.598058 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:53Z","lastTransitionTime":"2025-11-25T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.700637 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.701138 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.701199 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.701226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.701244 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:53Z","lastTransitionTime":"2025-11-25T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.803398 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.803516 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.803530 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.803547 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.803558 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:53Z","lastTransitionTime":"2025-11-25T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.906576 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.906622 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.906634 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.906655 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:53 crc kubenswrapper[4731]: I1125 14:55:53.906668 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:53Z","lastTransitionTime":"2025-11-25T14:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.008454 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.008490 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.008502 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.008518 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.008527 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:54Z","lastTransitionTime":"2025-11-25T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.111088 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.111123 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.111131 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.111148 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.111157 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:54Z","lastTransitionTime":"2025-11-25T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.214397 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.214439 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.214448 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.214462 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.214472 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:54Z","lastTransitionTime":"2025-11-25T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.317259 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.317612 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.317720 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.317873 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.318056 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:54Z","lastTransitionTime":"2025-11-25T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.421304 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.421345 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.421354 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.421370 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.421382 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:54Z","lastTransitionTime":"2025-11-25T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.524092 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.524150 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.524173 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.524204 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.524227 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:54Z","lastTransitionTime":"2025-11-25T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.626856 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.626935 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.626953 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.626978 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.627003 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:54Z","lastTransitionTime":"2025-11-25T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.730091 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.730165 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.730186 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.730218 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.730240 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:54Z","lastTransitionTime":"2025-11-25T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.810502 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.810638 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.810569 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.810861 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:54 crc kubenswrapper[4731]: E1125 14:55:54.810940 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:54 crc kubenswrapper[4731]: E1125 14:55:54.811004 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:54 crc kubenswrapper[4731]: E1125 14:55:54.811075 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:54 crc kubenswrapper[4731]: E1125 14:55:54.811179 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.832901 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.832971 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.832994 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.833021 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.833041 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:54Z","lastTransitionTime":"2025-11-25T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.936153 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.936201 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.936211 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.936226 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:54 crc kubenswrapper[4731]: I1125 14:55:54.936236 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:54Z","lastTransitionTime":"2025-11-25T14:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.039016 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.039087 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.039110 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.039139 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.039160 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:55Z","lastTransitionTime":"2025-11-25T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.142018 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.142050 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.142058 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.142072 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.142082 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:55Z","lastTransitionTime":"2025-11-25T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.245449 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.245490 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.245503 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.245518 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.245527 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:55Z","lastTransitionTime":"2025-11-25T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.348767 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.348960 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.348992 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.349014 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.349027 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:55Z","lastTransitionTime":"2025-11-25T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.451962 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.452042 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.452066 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.452107 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.452133 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:55Z","lastTransitionTime":"2025-11-25T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.554839 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.554917 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.554929 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.554976 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.554997 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:55Z","lastTransitionTime":"2025-11-25T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.658471 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.658529 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.658540 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.658560 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:55 crc kubenswrapper[4731]: I1125 14:55:55.658573 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:55Z","lastTransitionTime":"2025-11-25T14:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The five-entry cycle above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats verbatim 11 more times at ~100 ms intervals, from 14:55:55.761 through 14:55:56.804; only the timestamps advance.]
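Every iteration of this cycle reports the same root cause: the container runtime finds no CNI network configuration under /etc/kubernetes/cni/net.d/, so the kubelet keeps the node's Ready condition at False. As an illustration only, a minimal standalone probe (hypothetical helper, not kubelet code) of the same directory check might look like this:

```go
// cnicheck.go: a hypothetical standalone probe for the condition reported
// above; it scans /etc/kubernetes/cni/net.d/ the way a CNI config loader
// would and reports whether any network configuration is present.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the kubelet error
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Printf("cannot read %s: %v\n", confDir, err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		// CNI config loaders consider these extensions
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			fmt.Println("found network config:", e.Name())
			found++
		}
	}
	if found == 0 {
		fmt.Println("no CNI configuration file; network plugin would report NetworkPluginNotReady")
	}
}
```

On this node the directory is empty until the OVN-Kubernetes pods come up, which is why the condition clears only near the end of this excerpt.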
Nov 25 14:55:56 crc kubenswrapper[4731]: I1125 14:55:56.810901 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:56 crc kubenswrapper[4731]: E1125 14:55:56.811086 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:56 crc kubenswrapper[4731]: I1125 14:55:56.811158 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:56 crc kubenswrapper[4731]: I1125 14:55:56.811170 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:56 crc kubenswrapper[4731]: I1125 14:55:56.811232 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:56 crc kubenswrapper[4731]: E1125 14:55:56.811281 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:56 crc kubenswrapper[4731]: E1125 14:55:56.811357 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:56 crc kubenswrapper[4731]: E1125 14:55:56.811593 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:56 crc kubenswrapper[4731]: I1125 14:55:56.906649 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:56 crc kubenswrapper[4731]: I1125 14:55:56.906691 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:56 crc kubenswrapper[4731]: I1125 14:55:56.906701 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:56 crc kubenswrapper[4731]: I1125 14:55:56.906715 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:56 crc kubenswrapper[4731]: I1125 14:55:56.906727 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:56Z","lastTransitionTime":"2025-11-25T14:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Nov 25 14:55:57 crc kubenswrapper[4731]: I1125 14:55:57.010004 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:57 crc kubenswrapper[4731]: I1125 14:55:57.010414 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:57 crc kubenswrapper[4731]: I1125 14:55:57.010554 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:57 crc kubenswrapper[4731]: I1125 14:55:57.010723 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:57 crc kubenswrapper[4731]: I1125 14:55:57.010949 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:57Z","lastTransitionTime":"2025-11-25T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Nov 25 14:55:57 crc kubenswrapper[4731]: I1125 14:55:57.113742 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Nov 25 14:55:57 crc kubenswrapper[4731]: I1125 14:55:57.113807 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Nov 25 14:55:57 crc kubenswrapper[4731]: I1125 14:55:57.113848 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Nov 25 14:55:57 crc kubenswrapper[4731]: I1125 14:55:57.113868 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Nov 25 14:55:57 crc kubenswrapper[4731]: I1125 14:55:57.113881 4731 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-25T14:55:57Z","lastTransitionTime":"2025-11-25T14:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[The same five-entry cycle repeats verbatim 10 more times at ~100 ms intervals, from 14:55:57.216 through 14:55:58.129.]
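The condition={...} object that setters.go logs in each cycle is a node Ready condition serialized as JSON. A minimal sketch that reproduces the same payload shape, using a hand-rolled struct rather than the real k8s.io/api NodeCondition type:

```go
// nodecondition.go: reconstruct the Ready=False condition payload logged by
// setters.go above. The struct is a minimal illustrative stand-in; field
// names and order mirror the JSON in the log.
package main

import (
	"encoding/json"
	"fmt"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2025-11-25T14:55:57Z",
		LastTransitionTime: "2025-11-25T14:55:57Z",
		Reason:             "KubeletNotReady",
		// message truncated here; the log carries the full CNI error text
		Message: "container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady ...",
	}
	b, _ := json.Marshal(c)
	fmt.Println(string(b)) // matches the condition={...} payload seen in the log
}
```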
Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.193391 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4"] Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.193774 4731 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.195994 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.196027 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.196213 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.196248 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.248992 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-tlrbx" podStartSLOduration=92.248965483 podStartE2EDuration="1m32.248965483s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.231442941 +0000 UTC m=+116.978577391" watchObservedRunningTime="2025-11-25 14:55:58.248965483 +0000 UTC m=+116.996099933" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.249215 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-c2xb5" podStartSLOduration=92.249209339 podStartE2EDuration="1m32.249209339s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.246614264 +0000 UTC m=+116.993748704" watchObservedRunningTime="2025-11-25 14:55:58.249209339 +0000 UTC m=+116.996343779" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.261596 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7wt6q" podStartSLOduration=92.261569061 podStartE2EDuration="1m32.261569061s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.261001857 +0000 UTC m=+117.008136287" watchObservedRunningTime="2025-11-25 14:55:58.261569061 +0000 UTC m=+117.008703501" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.307071 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=29.307043708 podStartE2EDuration="29.307043708s" podCreationTimestamp="2025-11-25 14:55:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.292114792 +0000 UTC m=+117.039249222" watchObservedRunningTime="2025-11-25 14:55:58.307043708 +0000 UTC m=+117.054178138" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.307302 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.307297035 podStartE2EDuration="1m3.307297035s" podCreationTimestamp="2025-11-25 14:54:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.306772612 +0000 UTC m=+117.053907042" watchObservedRunningTime="2025-11-25 14:55:58.307297035 +0000 UTC m=+117.054431465" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.341743 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff8f7288-cd73-4651-a274-5a08f5ad11d5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.341834 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff8f7288-cd73-4651-a274-5a08f5ad11d5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.341922 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ff8f7288-cd73-4651-a274-5a08f5ad11d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.341963 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ff8f7288-cd73-4651-a274-5a08f5ad11d5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.341995 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8f7288-cd73-4651-a274-5a08f5ad11d5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.373607 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8xx4w" podStartSLOduration=92.373574516 podStartE2EDuration="1m32.373574516s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.373145965 +0000 UTC m=+117.120280395" watchObservedRunningTime="2025-11-25 14:55:58.373574516 +0000 UTC m=+117.120708956" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.387438 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podStartSLOduration=92.387407595 podStartE2EDuration="1m32.387407595s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.386331418 +0000 UTC m=+117.133465878" 
watchObservedRunningTime="2025-11-25 14:55:58.387407595 +0000 UTC m=+117.134542025" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.433500 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-672jb" podStartSLOduration=91.433471367 podStartE2EDuration="1m31.433471367s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.399328946 +0000 UTC m=+117.146463406" watchObservedRunningTime="2025-11-25 14:55:58.433471367 +0000 UTC m=+117.180605837" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.434106 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=17.434101203 podStartE2EDuration="17.434101203s" podCreationTimestamp="2025-11-25 14:55:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.433367655 +0000 UTC m=+117.180502095" watchObservedRunningTime="2025-11-25 14:55:58.434101203 +0000 UTC m=+117.181235643" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.443690 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ff8f7288-cd73-4651-a274-5a08f5ad11d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.443766 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ff8f7288-cd73-4651-a274-5a08f5ad11d5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.443793 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8f7288-cd73-4651-a274-5a08f5ad11d5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.443839 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ff8f7288-cd73-4651-a274-5a08f5ad11d5-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.443869 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff8f7288-cd73-4651-a274-5a08f5ad11d5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.443972 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff8f7288-cd73-4651-a274-5a08f5ad11d5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4"
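The pod_startup_latency_tracker entries above are simple subtraction: podStartSLOduration and podStartE2EDuration are watchObservedRunningTime minus podCreationTimestamp, and the zero-valued firstStartedPulling/lastFinishedPulling (0001-01-01) indicate no image pull was recorded for these pods. For node-resolver-tlrbx, 14:55:58.248965483 minus 14:54:26 is 92.248965483 s, exactly the logged value. A quick check with the timestamps copied from the log:

```go
// latencycheck.go: verify one podStartE2EDuration from the entries above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-11-25T14:54:26Z")
	watched, _ := time.Parse(time.RFC3339Nano, "2025-11-25T14:55:58.248965483Z")
	// prints 1m32.248965483s, matching podStartE2EDuration for node-resolver-tlrbx
	fmt.Println(watched.Sub(created))
}
```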
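The reconciler_common.go and operation_generator.go pairs around these entries show the volume manager's two-step flow for the cluster-version-operator pod: VerifyControllerAttachedVolume registers each volume first, then MountVolume.SetUp mounts it, with the success confirmations following below. A schematic sketch of that loop, with placeholder types and no-op operations that are purely illustrative, not the kubelet's volumemanager code:

```go
// mountflow.go: illustrative sketch of the two-phase volume flow visible in
// the reconciler entries; the volume names and plugins are the five logged
// for cluster-version-operator-5c965bbfc6-8qlt4.
package main

import "fmt"

type volume struct{ name, plugin string }

// placeholder: host-path/configmap/secret/projected volumes need no controller attach
func verifyAttached(v volume) error { return nil }

// placeholder: the real SetUp bind-mounts the volume into the pod directory
func setUp(v volume) error { return nil }

func main() {
	vols := []volume{
		{"kube-api-access", "kubernetes.io/projected"},
		{"service-ca", "kubernetes.io/configmap"},
		{"etc-cvo-updatepayloads", "kubernetes.io/host-path"},
		{"etc-ssl-certs", "kubernetes.io/host-path"},
		{"serving-cert", "kubernetes.io/secret"},
	}
	for _, v := range vols {
		if err := verifyAttached(v); err != nil {
			fmt.Println("attach check failed:", v.name, err)
			continue
		}
		if err := setUp(v); err != nil {
			fmt.Println("MountVolume.SetUp failed:", v.name, err)
			continue
		}
		fmt.Printf("MountVolume.SetUp succeeded for volume %q (%s)\n", v.name, v.plugin)
	}
}
```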
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff8f7288-cd73-4651-a274-5a08f5ad11d5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.444976 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ff8f7288-cd73-4651-a274-5a08f5ad11d5-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.445608 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ff8f7288-cd73-4651-a274-5a08f5ad11d5-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.456358 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff8f7288-cd73-4651-a274-5a08f5ad11d5-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.462336 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff8f7288-cd73-4651-a274-5a08f5ad11d5-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8qlt4\" (UID: \"ff8f7288-cd73-4651-a274-5a08f5ad11d5\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.470338 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=92.470316207 podStartE2EDuration="1m32.470316207s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.452862997 +0000 UTC m=+117.199997437" watchObservedRunningTime="2025-11-25 14:55:58.470316207 +0000 UTC m=+117.217450637" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.514068 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.536786 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=91.536755323 podStartE2EDuration="1m31.536755323s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:58.504961771 +0000 UTC m=+117.252096221" watchObservedRunningTime="2025-11-25 14:55:58.536755323 +0000 UTC m=+117.283889773" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.811589 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:55:58 crc kubenswrapper[4731]: E1125 14:55:58.811766 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.812022 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:55:58 crc kubenswrapper[4731]: E1125 14:55:58.812070 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.812247 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:55:58 crc kubenswrapper[4731]: I1125 14:55:58.812430 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:55:58 crc kubenswrapper[4731]: E1125 14:55:58.812552 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:55:58 crc kubenswrapper[4731]: E1125 14:55:58.812617 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:55:59 crc kubenswrapper[4731]: I1125 14:55:59.539250 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" event={"ID":"ff8f7288-cd73-4651-a274-5a08f5ad11d5","Type":"ContainerStarted","Data":"a3f044e859caed6e182566f079fe1b25b5a71d4baa862921873276a3b766b5bb"} Nov 25 14:55:59 crc kubenswrapper[4731]: I1125 14:55:59.539316 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" event={"ID":"ff8f7288-cd73-4651-a274-5a08f5ad11d5","Type":"ContainerStarted","Data":"a26a0d9c9eddd944ec1e8ca522e6de84a58ee04795c6cccd17358970ba8634e8"} Nov 25 14:55:59 crc kubenswrapper[4731]: I1125 14:55:59.559733 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8qlt4" podStartSLOduration=93.559705303 podStartE2EDuration="1m33.559705303s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:55:59.558712018 +0000 UTC m=+118.305846488" watchObservedRunningTime="2025-11-25 14:55:59.559705303 +0000 UTC m=+118.306839763" Nov 25 14:55:59 crc kubenswrapper[4731]: I1125 14:55:59.813111 4731 scope.go:117] "RemoveContainer" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6" Nov 25 14:55:59 crc kubenswrapper[4731]: E1125 14:55:59.813476 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-hc8cx_openshift-ovn-kubernetes(2fa0e665-3cc9-4bb9-9758-75bb8db39513)\"" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" Nov 25 14:56:00 crc kubenswrapper[4731]: I1125 14:56:00.810524 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:00 crc kubenswrapper[4731]: I1125 14:56:00.810578 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:00 crc kubenswrapper[4731]: I1125 14:56:00.810611 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:00 crc kubenswrapper[4731]: I1125 14:56:00.810730 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:00 crc kubenswrapper[4731]: E1125 14:56:00.810713 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:56:00 crc kubenswrapper[4731]: E1125 14:56:00.810881 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:56:00 crc kubenswrapper[4731]: E1125 14:56:00.810931 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:56:00 crc kubenswrapper[4731]: E1125 14:56:00.811152 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:56:01 crc kubenswrapper[4731]: I1125 14:56:01.547939 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xx4w_062d0f18-096f-4f99-9faa-74dbd454bcd0/kube-multus/1.log" Nov 25 14:56:01 crc kubenswrapper[4731]: I1125 14:56:01.548583 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xx4w_062d0f18-096f-4f99-9faa-74dbd454bcd0/kube-multus/0.log" Nov 25 14:56:01 crc kubenswrapper[4731]: I1125 14:56:01.548656 4731 generic.go:334] "Generic (PLEG): container finished" podID="062d0f18-096f-4f99-9faa-74dbd454bcd0" containerID="471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2" exitCode=1 Nov 25 14:56:01 crc kubenswrapper[4731]: I1125 14:56:01.548705 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xx4w" event={"ID":"062d0f18-096f-4f99-9faa-74dbd454bcd0","Type":"ContainerDied","Data":"471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2"} Nov 25 14:56:01 crc kubenswrapper[4731]: I1125 14:56:01.548763 4731 scope.go:117] "RemoveContainer" containerID="b8dd6b65ef51188fd15dd128f4ad7bf57a7873bb2a5ca4fbfb6c28fab90342ac" Nov 25 14:56:01 crc kubenswrapper[4731]: I1125 14:56:01.549470 4731 scope.go:117] "RemoveContainer" containerID="471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2" Nov 25 14:56:01 crc kubenswrapper[4731]: E1125 14:56:01.549788 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-8xx4w_openshift-multus(062d0f18-096f-4f99-9faa-74dbd454bcd0)\"" pod="openshift-multus/multus-8xx4w" podUID="062d0f18-096f-4f99-9faa-74dbd454bcd0" Nov 25 14:56:02 crc kubenswrapper[4731]: I1125 14:56:02.553662 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xx4w_062d0f18-096f-4f99-9faa-74dbd454bcd0/kube-multus/1.log" Nov 25 14:56:02 crc kubenswrapper[4731]: E1125 
Nov 25 14:56:02 crc kubenswrapper[4731]: I1125 14:56:02.811403 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:02 crc kubenswrapper[4731]: I1125 14:56:02.811546 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:02 crc kubenswrapper[4731]: I1125 14:56:02.811545 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:02 crc kubenswrapper[4731]: E1125 14:56:02.812841 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:56:02 crc kubenswrapper[4731]: I1125 14:56:02.812938 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:02 crc kubenswrapper[4731]: E1125 14:56:02.812997 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:56:02 crc kubenswrapper[4731]: E1125 14:56:02.813174 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:56:02 crc kubenswrapper[4731]: E1125 14:56:02.813270 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:56:02 crc kubenswrapper[4731]: E1125 14:56:02.971350 4731 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 14:56:04 crc kubenswrapper[4731]: I1125 14:56:04.815984 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:04 crc kubenswrapper[4731]: I1125 14:56:04.816311 4731 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:04 crc kubenswrapper[4731]: I1125 14:56:04.816163 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:04 crc kubenswrapper[4731]: E1125 14:56:04.816555 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:56:04 crc kubenswrapper[4731]: I1125 14:56:04.816141 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:04 crc kubenswrapper[4731]: E1125 14:56:04.816856 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:56:04 crc kubenswrapper[4731]: E1125 14:56:04.817531 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:56:04 crc kubenswrapper[4731]: E1125 14:56:04.818266 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:56:06 crc kubenswrapper[4731]: I1125 14:56:06.811281 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:06 crc kubenswrapper[4731]: E1125 14:56:06.811442 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:56:06 crc kubenswrapper[4731]: I1125 14:56:06.811306 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:06 crc kubenswrapper[4731]: I1125 14:56:06.811282 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:06 crc kubenswrapper[4731]: E1125 14:56:06.811784 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:56:06 crc kubenswrapper[4731]: I1125 14:56:06.811806 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:06 crc kubenswrapper[4731]: E1125 14:56:06.811909 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:56:06 crc kubenswrapper[4731]: E1125 14:56:06.811979 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:56:07 crc kubenswrapper[4731]: E1125 14:56:07.972321 4731 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 14:56:08 crc kubenswrapper[4731]: I1125 14:56:08.810832 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:08 crc kubenswrapper[4731]: I1125 14:56:08.810893 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:08 crc kubenswrapper[4731]: E1125 14:56:08.810995 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:56:08 crc kubenswrapper[4731]: E1125 14:56:08.811220 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:56:08 crc kubenswrapper[4731]: I1125 14:56:08.811278 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:08 crc kubenswrapper[4731]: E1125 14:56:08.811337 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:56:08 crc kubenswrapper[4731]: I1125 14:56:08.811269 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:08 crc kubenswrapper[4731]: E1125 14:56:08.811411 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:56:10 crc kubenswrapper[4731]: I1125 14:56:10.811287 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:10 crc kubenswrapper[4731]: I1125 14:56:10.811287 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:10 crc kubenswrapper[4731]: I1125 14:56:10.811289 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:10 crc kubenswrapper[4731]: E1125 14:56:10.811442 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:56:10 crc kubenswrapper[4731]: E1125 14:56:10.811511 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:56:10 crc kubenswrapper[4731]: I1125 14:56:10.811553 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:10 crc kubenswrapper[4731]: E1125 14:56:10.811611 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:56:10 crc kubenswrapper[4731]: E1125 14:56:10.811665 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:56:10 crc kubenswrapper[4731]: I1125 14:56:10.811680 4731 scope.go:117] "RemoveContainer" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6" Nov 25 14:56:11 crc kubenswrapper[4731]: I1125 14:56:11.590131 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/3.log" Nov 25 14:56:11 crc kubenswrapper[4731]: I1125 14:56:11.593161 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerStarted","Data":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"} Nov 25 14:56:11 crc kubenswrapper[4731]: I1125 14:56:11.593618 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:56:11 crc kubenswrapper[4731]: I1125 14:56:11.629500 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podStartSLOduration=105.629482335 podStartE2EDuration="1m45.629482335s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:11.628041397 +0000 UTC m=+130.375175847" watchObservedRunningTime="2025-11-25 14:56:11.629482335 +0000 UTC m=+130.376616785" Nov 25 14:56:11 crc kubenswrapper[4731]: I1125 14:56:11.758467 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7xvpg"] Nov 25 14:56:11 crc kubenswrapper[4731]: I1125 14:56:11.758592 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:11 crc kubenswrapper[4731]: E1125 14:56:11.758690 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:56:12 crc kubenswrapper[4731]: I1125 14:56:12.811127 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:12 crc kubenswrapper[4731]: I1125 14:56:12.811150 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:12 crc kubenswrapper[4731]: I1125 14:56:12.811272 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:12 crc kubenswrapper[4731]: E1125 14:56:12.813283 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:56:12 crc kubenswrapper[4731]: E1125 14:56:12.813464 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:56:12 crc kubenswrapper[4731]: E1125 14:56:12.813643 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:56:12 crc kubenswrapper[4731]: E1125 14:56:12.973934 4731 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Nov 25 14:56:13 crc kubenswrapper[4731]: I1125 14:56:13.810644 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:13 crc kubenswrapper[4731]: E1125 14:56:13.810929 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:56:14 crc kubenswrapper[4731]: I1125 14:56:14.811136 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:14 crc kubenswrapper[4731]: E1125 14:56:14.811410 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:56:14 crc kubenswrapper[4731]: I1125 14:56:14.811455 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:14 crc kubenswrapper[4731]: I1125 14:56:14.811515 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:14 crc kubenswrapper[4731]: E1125 14:56:14.811981 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:56:14 crc kubenswrapper[4731]: E1125 14:56:14.812030 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:56:14 crc kubenswrapper[4731]: I1125 14:56:14.812163 4731 scope.go:117] "RemoveContainer" containerID="471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2" Nov 25 14:56:15 crc kubenswrapper[4731]: I1125 14:56:15.615309 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xx4w_062d0f18-096f-4f99-9faa-74dbd454bcd0/kube-multus/1.log" Nov 25 14:56:15 crc kubenswrapper[4731]: I1125 14:56:15.615388 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xx4w" event={"ID":"062d0f18-096f-4f99-9faa-74dbd454bcd0","Type":"ContainerStarted","Data":"9e6b96b089080d9bf222e63145a44ce77ca0367c32031873757a6d7124b3aae9"} Nov 25 14:56:15 crc kubenswrapper[4731]: I1125 14:56:15.811135 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:15 crc kubenswrapper[4731]: E1125 14:56:15.811379 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:56:16 crc kubenswrapper[4731]: I1125 14:56:16.811235 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:16 crc kubenswrapper[4731]: E1125 14:56:16.812231 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Nov 25 14:56:16 crc kubenswrapper[4731]: I1125 14:56:16.812670 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:16 crc kubenswrapper[4731]: E1125 14:56:16.812747 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Nov 25 14:56:16 crc kubenswrapper[4731]: I1125 14:56:16.813033 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:16 crc kubenswrapper[4731]: E1125 14:56:16.813109 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Nov 25 14:56:17 crc kubenswrapper[4731]: I1125 14:56:17.810734 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:17 crc kubenswrapper[4731]: E1125 14:56:17.811312 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-7xvpg" podUID="12559634-427c-4ce7-b9e8-d2dd37299642" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.525461 4731 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.562608 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.565993 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.566451 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.566925 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.567150 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.567801 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.571499 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.572121 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-nzzxp"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.572386 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n47rp"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.573597 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.575270 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-nzzxp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.576091 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2m7w"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.576543 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.576875 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.578647 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579204 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579209 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579334 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579376 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579469 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579566 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579598 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579691 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579793 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579721 
4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579760 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.579958 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.580006 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.580639 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.581008 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.584384 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vpcm5"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.584608 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.585057 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gf6k5"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.585331 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-dhtk5"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.585442 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.585672 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.585840 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.587219 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fp5cw"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.587857 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.594933 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.596907 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.601302 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.604287 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.604500 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.609135 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.609915 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.609959 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.613026 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.622019 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.622635 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mvw8v"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.623114 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.623810 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.624148 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.624983 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nrh95"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.625555 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.625631 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.626445 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.626846 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.630576 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.632400 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.632693 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.633651 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.633835 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.634251 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.634506 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.634628 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.634745 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.634893 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.634996 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.635158 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.635295 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.635432 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.635485 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.635617 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.635720 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.635807 4731 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-apiserver"/"encryption-config-1" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636050 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636085 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636219 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636275 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636326 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636448 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636555 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636622 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636802 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636904 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.636956 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637023 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637082 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637125 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637158 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637220 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637257 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637084 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637317 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637311 4731 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637425 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637495 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637547 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637652 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637719 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637753 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.637942 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.638100 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.638294 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.638459 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.638559 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.638790 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.640509 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.639128 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.641902 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.643200 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.643578 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.643630 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.643731 4731 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.643887 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.644699 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.644874 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.644951 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.656905 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.657290 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.657369 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.659321 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.660491 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.662084 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.662521 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.662770 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.663436 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.664198 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.665223 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.665302 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.665381 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.665642 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.665908 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.666407 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.672367 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.673147 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.674593 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.685731 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fjq2t"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.686407 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.687001 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.688194 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.688503 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690648 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690703 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eed496a-1d92-43f0-8571-6667803a45d0-serving-cert\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690737 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690763 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6ef9a3c-21db-4567-a83c-16f1836b4393-serving-cert\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690788 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8eed496a-1d92-43f0-8571-6667803a45d0-etcd-client\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690829 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-config\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690853 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-policies\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690877 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690911 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a254a00-2dcd-43cc-8859-fe0e4efed204-serving-cert\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690937 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82912045-fbea-4058-b4a4-e462a37d85b7-config\") pod \"kube-apiserver-operator-766d6c64bb-jxnpg\" (UID: \"82912045-fbea-4058-b4a4-e462a37d85b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690961 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eccda64-70d4-41d9-83bd-5e49545cb4ce-config\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.690981 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddc212cb-2fed-4211-be15-5bd5c25d479a-config\") pod \"kube-controller-manager-operator-78b949d7b-tbhxk\" (UID: \"ddc212cb-2fed-4211-be15-5bd5c25d479a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691003 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a254a00-2dcd-43cc-8859-fe0e4efed204-service-ca-bundle\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691027 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eed496a-1d92-43f0-8571-6667803a45d0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691052 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a254a00-2dcd-43cc-8859-fe0e4efed204-config\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691077 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a254a00-2dcd-43cc-8859-fe0e4efed204-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691101 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8eccda64-70d4-41d9-83bd-5e49545cb4ce-auth-proxy-config\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691123 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691146 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ef9a3c-21db-4567-a83c-16f1836b4393-trusted-ca\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691173 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vg7b\" (UniqueName: \"kubernetes.io/projected/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-kube-api-access-6vg7b\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691201 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgr55\" (UniqueName: \"kubernetes.io/projected/fc54c4d0-18b0-4249-9e18-d8a31c0135ae-kube-api-access-qgr55\") pod \"downloads-7954f5f757-nzzxp\" (UID: \"fc54c4d0-18b0-4249-9e18-d8a31c0135ae\") " pod="openshift-console/downloads-7954f5f757-nzzxp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691226 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhwf\" (UniqueName: \"kubernetes.io/projected/8eccda64-70d4-41d9-83bd-5e49545cb4ce-kube-api-access-zzhwf\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691249 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691275 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71-serving-cert\") pod \"openshift-config-operator-7777fb866f-qtqcb\" 
(UID: \"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691300 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-serving-cert\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691328 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75nfb\" (UniqueName: \"kubernetes.io/projected/e5ccef3b-b78c-42b7-aa17-b52800f28cc0-kube-api-access-75nfb\") pod \"dns-operator-744455d44c-nrh95\" (UID: \"e5ccef3b-b78c-42b7-aa17-b52800f28cc0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691356 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-node-pullsecrets\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691378 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-etcd-serving-ca\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691406 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnp7x\" (UniqueName: \"kubernetes.io/projected/e3c25ac6-1dcc-4c2e-9983-760215771431-kube-api-access-rnp7x\") pod \"openshift-controller-manager-operator-756b6f6bc6-dhjlm\" (UID: \"e3c25ac6-1dcc-4c2e-9983-760215771431\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691432 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/740209e9-0d7f-4da0-b68f-2ef5628039df-config\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691453 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691477 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/02f57d03-52ab-4660-bac1-d57f74e53e0e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" 
(UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691497 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-encryption-config\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691518 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c25ac6-1dcc-4c2e-9983-760215771431-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dhjlm\" (UID: \"e3c25ac6-1dcc-4c2e-9983-760215771431\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691553 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-dir\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691581 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-trusted-ca-bundle\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691601 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691644 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whphm\" (UniqueName: \"kubernetes.io/projected/89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71-kube-api-access-whphm\") pod \"openshift-config-operator-7777fb866f-qtqcb\" (UID: \"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691666 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691694 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-config\") pod 
\"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691719 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-service-ca\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691742 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7gnj\" (UniqueName: \"kubernetes.io/projected/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-kube-api-access-n7gnj\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691766 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-oauth-serving-cert\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691790 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eed496a-1d92-43f0-8571-6667803a45d0-audit-dir\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691840 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691867 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkx8r\" (UniqueName: \"kubernetes.io/projected/02f57d03-52ab-4660-bac1-d57f74e53e0e-kube-api-access-vkx8r\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691890 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfplq\" (UniqueName: \"kubernetes.io/projected/5a457d06-8bf4-4077-a3e9-d25ff971832b-kube-api-access-jfplq\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691912 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7966982c-d1bf-409d-a672-6bbaf0d1c954-metrics-tls\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 
14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691936 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-serving-cert\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691963 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/02f57d03-52ab-4660-bac1-d57f74e53e0e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.691988 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvz9h\" (UniqueName: \"kubernetes.io/projected/a6ef9a3c-21db-4567-a83c-16f1836b4393-kube-api-access-pvz9h\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692012 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-etcd-client\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692034 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7966982c-d1bf-409d-a672-6bbaf0d1c954-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692075 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-config\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692098 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8sl64\" (UID: \"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692122 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-serving-cert\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 
14:56:18.692146 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6p7lk\" (UniqueName: \"kubernetes.io/projected/5a254a00-2dcd-43cc-8859-fe0e4efed204-kube-api-access-6p7lk\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692170 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xkdz\" (UniqueName: \"kubernetes.io/projected/740209e9-0d7f-4da0-b68f-2ef5628039df-kube-api-access-9xkdz\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692196 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-client-ca\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692225 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmxz7\" (UniqueName: \"kubernetes.io/projected/39a12893-02aa-41c9-94f9-2c3d11cff3bb-kube-api-access-nmxz7\") pod \"cluster-samples-operator-665b6dd947-4qb6n\" (UID: \"39a12893-02aa-41c9-94f9-2c3d11cff3bb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692252 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6ef9a3c-21db-4567-a83c-16f1836b4393-config\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692280 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82912045-fbea-4058-b4a4-e462a37d85b7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jxnpg\" (UID: \"82912045-fbea-4058-b4a4-e462a37d85b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692303 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692325 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lt5f\" (UniqueName: \"kubernetes.io/projected/7966982c-d1bf-409d-a672-6bbaf0d1c954-kube-api-access-5lt5f\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692352 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/740209e9-0d7f-4da0-b68f-2ef5628039df-images\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692374 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-oauth-config\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692398 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7966982c-d1bf-409d-a672-6bbaf0d1c954-trusted-ca\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692422 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692444 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-image-import-ca\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692486 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qtqcb\" (UID: \"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692508 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692528 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-config\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 
14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692552 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8eed496a-1d92-43f0-8571-6667803a45d0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692574 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02f57d03-52ab-4660-bac1-d57f74e53e0e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692603 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692626 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-audit-dir\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692649 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e5ccef3b-b78c-42b7-aa17-b52800f28cc0-metrics-tls\") pod \"dns-operator-744455d44c-nrh95\" (UID: \"e5ccef3b-b78c-42b7-aa17-b52800f28cc0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692669 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8eed496a-1d92-43f0-8571-6667803a45d0-encryption-config\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692744 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7gzv\" (UniqueName: \"kubernetes.io/projected/8eed496a-1d92-43f0-8571-6667803a45d0-kube-api-access-g7gzv\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692769 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/740209e9-0d7f-4da0-b68f-2ef5628039df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.692795 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8eccda64-70d4-41d9-83bd-5e49545cb4ce-machine-approver-tls\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.695368 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lscm9"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.696071 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.696412 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbgwg"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.697106 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.697410 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.697590 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.701914 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/39a12893-02aa-41c9-94f9-2c3d11cff3bb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4qb6n\" (UID: \"39a12893-02aa-41c9-94f9-2c3d11cff3bb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.701965 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8sl64\" (UID: \"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.701985 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddc212cb-2fed-4211-be15-5bd5c25d479a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tbhxk\" (UID: \"ddc212cb-2fed-4211-be15-5bd5c25d479a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.702009 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-client-ca\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.702027 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-664g5\" (UniqueName: \"kubernetes.io/projected/eea531d8-c1b6-4b38-908e-e385c965d06e-kube-api-access-664g5\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.702045 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-audit\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.702067 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3c25ac6-1dcc-4c2e-9983-760215771431-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dhjlm\" (UID: \"e3c25ac6-1dcc-4c2e-9983-760215771431\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.702090 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8eed496a-1d92-43f0-8571-6667803a45d0-audit-policies\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.702105 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-serving-cert\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.702125 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82912045-fbea-4058-b4a4-e462a37d85b7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jxnpg\" (UID: \"82912045-fbea-4058-b4a4-e462a37d85b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.702163 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4xm\" (UniqueName: \"kubernetes.io/projected/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-kube-api-access-9w4xm\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.702184 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vtx\" (UniqueName: \"kubernetes.io/projected/b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0-kube-api-access-j9vtx\") pod \"openshift-apiserver-operator-796bbdcf4f-8sl64\" (UID: \"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.702201 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddc212cb-2fed-4211-be15-5bd5c25d479a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tbhxk\" (UID: \"ddc212cb-2fed-4211-be15-5bd5c25d479a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.709764 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-sclpc"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.710971 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.710057 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.710171 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.710233 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.713462 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.714030 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.716501 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.726024 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.726411 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.726873 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.727335 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.727638 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.727791 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.728534 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.732930 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.733664 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.734390 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5mpt"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.734671 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.735321 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.740916 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.741600 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.742300 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.743664 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.744678 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.747001 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-grktp"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.751127 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.753202 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.754054 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.754249 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.754518 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d9dwj"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.755482 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.759277 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cmr7l"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.764128 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.769561 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.770409 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.772280 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.772912 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.773506 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.775120 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nzzxp"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.776368 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-bspt9"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.777285 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.780162 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dhtk5"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.780356 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mvw8v"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.781888 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.785558 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.785612 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.785622 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fp5cw"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.791313 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.793364 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.794667 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gf6k5"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.798131 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.799329 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.801416 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.802953 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7gnj\" (UniqueName: \"kubernetes.io/projected/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-kube-api-access-n7gnj\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.802989 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkx8r\" (UniqueName: \"kubernetes.io/projected/02f57d03-52ab-4660-bac1-d57f74e53e0e-kube-api-access-vkx8r\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803023 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-serving-cert\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803055 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7966982c-d1bf-409d-a672-6bbaf0d1c954-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803083 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-config\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803115 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/02f57d03-52ab-4660-bac1-d57f74e53e0e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803143 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvz9h\" (UniqueName: \"kubernetes.io/projected/a6ef9a3c-21db-4567-a83c-16f1836b4393-kube-api-access-pvz9h\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803169 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8sl64\" (UID: \"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803247 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-plugins-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803278 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-images\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803309 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6ef9a3c-21db-4567-a83c-16f1836b4393-config\") pod \"console-operator-58897d9998-gf6k5\" (UID: 
\"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803336 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-mountpoint-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803360 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-csi-data-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803386 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82912045-fbea-4058-b4a4-e462a37d85b7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jxnpg\" (UID: \"82912045-fbea-4058-b4a4-e462a37d85b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803413 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/740209e9-0d7f-4da0-b68f-2ef5628039df-images\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803438 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803465 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-image-import-ca\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803493 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02f57d03-52ab-4660-bac1-d57f74e53e0e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803523 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-proxy-tls\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803552 
4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-audit-dir\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803579 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e5ccef3b-b78c-42b7-aa17-b52800f28cc0-metrics-tls\") pod \"dns-operator-744455d44c-nrh95\" (UID: \"e5ccef3b-b78c-42b7-aa17-b52800f28cc0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803607 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8eccda64-70d4-41d9-83bd-5e49545cb4ce-machine-approver-tls\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803640 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8380637d-e32d-4018-8081-28b3076fad29-config-volume\") pod \"collect-profiles-29401365-x2qcr\" (UID: \"8380637d-e32d-4018-8081-28b3076fad29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803674 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddc212cb-2fed-4211-be15-5bd5c25d479a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tbhxk\" (UID: \"ddc212cb-2fed-4211-be15-5bd5c25d479a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803697 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-registration-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803723 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltjl4\" (UniqueName: \"kubernetes.io/projected/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-kube-api-access-ltjl4\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803757 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-client-ca\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803787 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-664g5\" (UniqueName: 
\"kubernetes.io/projected/eea531d8-c1b6-4b38-908e-e385c965d06e-kube-api-access-664g5\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803836 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4979\" (UniqueName: \"kubernetes.io/projected/80231337-3fb3-499b-a626-48423f329e0b-kube-api-access-q4979\") pod \"multus-admission-controller-857f4d67dd-fjq2t\" (UID: \"80231337-3fb3-499b-a626-48423f329e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803869 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-serving-cert\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803896 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0b48e8f6-4a4f-40da-8304-69ed663dc05b-srv-cert\") pod \"catalog-operator-68c6474976-tkwxv\" (UID: \"0b48e8f6-4a4f-40da-8304-69ed663dc05b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803925 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4xm\" (UniqueName: \"kubernetes.io/projected/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-kube-api-access-9w4xm\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803958 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eed496a-1d92-43f0-8571-6667803a45d0-serving-cert\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.803991 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804040 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6ef9a3c-21db-4567-a83c-16f1836b4393-serving-cert\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804069 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8eed496a-1d92-43f0-8571-6667803a45d0-etcd-client\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804096 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-config\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804122 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-policies\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804151 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804192 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e59cf0b-743e-430a-b054-42dc854f20dc-config\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804221 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804245 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eccda64-70d4-41d9-83bd-5e49545cb4ce-config\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804264 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a254a00-2dcd-43cc-8859-fe0e4efed204-service-ca-bundle\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804289 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eed496a-1d92-43f0-8571-6667803a45d0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804309 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a254a00-2dcd-43cc-8859-fe0e4efed204-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804330 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804350 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804400 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8eccda64-70d4-41d9-83bd-5e49545cb4ce-auth-proxy-config\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804421 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vg7b\" (UniqueName: \"kubernetes.io/projected/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-kube-api-access-6vg7b\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804445 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e59cf0b-743e-430a-b054-42dc854f20dc-serving-cert\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804467 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzhwf\" (UniqueName: \"kubernetes.io/projected/8eccda64-70d4-41d9-83bd-5e49545cb4ce-kube-api-access-zzhwf\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804489 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-serving-cert\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804508 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804527 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-node-pullsecrets\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804548 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnp7x\" (UniqueName: \"kubernetes.io/projected/e3c25ac6-1dcc-4c2e-9983-760215771431-kube-api-access-rnp7x\") pod \"openshift-controller-manager-operator-756b6f6bc6-dhjlm\" (UID: \"e3c25ac6-1dcc-4c2e-9983-760215771431\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804572 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/02f57d03-52ab-4660-bac1-d57f74e53e0e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804593 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-encryption-config\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804587 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6ef9a3c-21db-4567-a83c-16f1836b4393-config\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804627 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-config\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804651 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-oauth-serving-cert\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804647 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-config\") pod 
\"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804675 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvj7m\" (UniqueName: \"kubernetes.io/projected/2e59cf0b-743e-430a-b054-42dc854f20dc-kube-api-access-vvj7m\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804774 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eed496a-1d92-43f0-8571-6667803a45d0-audit-dir\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804847 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804880 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfplq\" (UniqueName: \"kubernetes.io/projected/5a457d06-8bf4-4077-a3e9-d25ff971832b-kube-api-access-jfplq\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804905 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7966982c-d1bf-409d-a672-6bbaf0d1c954-metrics-tls\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804936 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804966 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-etcd-client\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.804992 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-serving-cert\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 
14:56:18.805018 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6p7lk\" (UniqueName: \"kubernetes.io/projected/5a254a00-2dcd-43cc-8859-fe0e4efed204-kube-api-access-6p7lk\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805051 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zvdm\" (UniqueName: \"kubernetes.io/projected/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-kube-api-access-7zvdm\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805083 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xkdz\" (UniqueName: \"kubernetes.io/projected/740209e9-0d7f-4da0-b68f-2ef5628039df-kube-api-access-9xkdz\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805105 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-client-ca\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805137 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmxz7\" (UniqueName: \"kubernetes.io/projected/39a12893-02aa-41c9-94f9-2c3d11cff3bb-kube-api-access-nmxz7\") pod \"cluster-samples-operator-665b6dd947-4qb6n\" (UID: \"39a12893-02aa-41c9-94f9-2c3d11cff3bb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805165 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805192 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lt5f\" (UniqueName: \"kubernetes.io/projected/7966982c-d1bf-409d-a672-6bbaf0d1c954-kube-api-access-5lt5f\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805222 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805262 
4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7966982c-d1bf-409d-a672-6bbaf0d1c954-trusted-ca\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805289 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-oauth-config\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805320 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805349 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-socket-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805377 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-config\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805417 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qtqcb\" (UID: \"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805443 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8eed496a-1d92-43f0-8571-6667803a45d0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805471 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805497 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8380637d-e32d-4018-8081-28b3076fad29-secret-volume\") pod \"collect-profiles-29401365-x2qcr\" (UID: 
\"8380637d-e32d-4018-8081-28b3076fad29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805503 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eed496a-1d92-43f0-8571-6667803a45d0-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805520 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8eed496a-1d92-43f0-8571-6667803a45d0-encryption-config\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805601 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7gzv\" (UniqueName: \"kubernetes.io/projected/8eed496a-1d92-43f0-8571-6667803a45d0-kube-api-access-g7gzv\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805666 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/740209e9-0d7f-4da0-b68f-2ef5628039df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805722 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dc67\" (UniqueName: \"kubernetes.io/projected/013533b2-fe47-4b12-9ecf-98705944a1eb-kube-api-access-8dc67\") pod \"control-plane-machine-set-operator-78cbb6b69f-qpv96\" (UID: \"013533b2-fe47-4b12-9ecf-98705944a1eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805756 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/39a12893-02aa-41c9-94f9-2c3d11cff3bb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4qb6n\" (UID: \"39a12893-02aa-41c9-94f9-2c3d11cff3bb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805806 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8sl64\" (UID: \"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805867 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-audit\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 
14:56:18.805892 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82912045-fbea-4058-b4a4-e462a37d85b7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jxnpg\" (UID: \"82912045-fbea-4058-b4a4-e462a37d85b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805935 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3c25ac6-1dcc-4c2e-9983-760215771431-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dhjlm\" (UID: \"e3c25ac6-1dcc-4c2e-9983-760215771431\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805962 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8eed496a-1d92-43f0-8571-6667803a45d0-audit-policies\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.805991 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9vtx\" (UniqueName: \"kubernetes.io/projected/b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0-kube-api-access-j9vtx\") pod \"openshift-apiserver-operator-796bbdcf4f-8sl64\" (UID: \"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806016 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddc212cb-2fed-4211-be15-5bd5c25d479a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tbhxk\" (UID: \"ddc212cb-2fed-4211-be15-5bd5c25d479a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806072 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp55p\" (UniqueName: \"kubernetes.io/projected/8380637d-e32d-4018-8081-28b3076fad29-kube-api-access-sp55p\") pod \"collect-profiles-29401365-x2qcr\" (UID: \"8380637d-e32d-4018-8081-28b3076fad29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806119 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806149 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80231337-3fb3-499b-a626-48423f329e0b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fjq2t\" (UID: \"80231337-3fb3-499b-a626-48423f329e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" Nov 25 14:56:18 crc 
kubenswrapper[4731]: I1125 14:56:18.806178 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82912045-fbea-4058-b4a4-e462a37d85b7-config\") pod \"kube-apiserver-operator-766d6c64bb-jxnpg\" (UID: \"82912045-fbea-4058-b4a4-e462a37d85b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806201 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddc212cb-2fed-4211-be15-5bd5c25d479a-config\") pod \"kube-controller-manager-operator-78b949d7b-tbhxk\" (UID: \"ddc212cb-2fed-4211-be15-5bd5c25d479a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806229 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a254a00-2dcd-43cc-8859-fe0e4efed204-serving-cert\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806252 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jcl7\" (UniqueName: \"kubernetes.io/projected/ace20ee2-d2d1-402e-9a3d-1b3f4fb2491c-kube-api-access-7jcl7\") pod \"migrator-59844c95c7-zgzqt\" (UID: \"ace20ee2-d2d1-402e-9a3d-1b3f4fb2491c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806271 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv274\" (UniqueName: \"kubernetes.io/projected/eb067501-cc83-4cdd-ad52-224774a043c4-kube-api-access-mv274\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806293 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-cabundle\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806311 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a254a00-2dcd-43cc-8859-fe0e4efed204-config\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806331 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5gxj\" (UniqueName: \"kubernetes.io/projected/a160b766-12c1-4549-b892-794fe9009a9e-kube-api-access-x5gxj\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806352 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/013533b2-fe47-4b12-9ecf-98705944a1eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qpv96\" (UID: \"013533b2-fe47-4b12-9ecf-98705944a1eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806378 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806397 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ef9a3c-21db-4567-a83c-16f1836b4393-trusted-ca\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806420 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgr55\" (UniqueName: \"kubernetes.io/projected/fc54c4d0-18b0-4249-9e18-d8a31c0135ae-kube-api-access-qgr55\") pod \"downloads-7954f5f757-nzzxp\" (UID: \"fc54c4d0-18b0-4249-9e18-d8a31c0135ae\") " pod="openshift-console/downloads-7954f5f757-nzzxp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806439 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806462 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71-serving-cert\") pod \"openshift-config-operator-7777fb866f-qtqcb\" (UID: \"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806484 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75nfb\" (UniqueName: \"kubernetes.io/projected/e5ccef3b-b78c-42b7-aa17-b52800f28cc0-kube-api-access-75nfb\") pod \"dns-operator-744455d44c-nrh95\" (UID: \"e5ccef3b-b78c-42b7-aa17-b52800f28cc0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806502 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-etcd-serving-ca\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806520 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/0b48e8f6-4a4f-40da-8304-69ed663dc05b-profile-collector-cert\") pod \"catalog-operator-68c6474976-tkwxv\" (UID: \"0b48e8f6-4a4f-40da-8304-69ed663dc05b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806541 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/740209e9-0d7f-4da0-b68f-2ef5628039df-config\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806563 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806582 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-key\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806603 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c25ac6-1dcc-4c2e-9983-760215771431-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dhjlm\" (UID: \"e3c25ac6-1dcc-4c2e-9983-760215771431\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806634 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-dir\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806654 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-trusted-ca-bundle\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806678 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806701 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fp5cw\" 
(UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806727 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-service-ca\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806757 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whphm\" (UniqueName: \"kubernetes.io/projected/89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71-kube-api-access-whphm\") pod \"openshift-config-operator-7777fb866f-qtqcb\" (UID: \"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.806780 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf9f9\" (UniqueName: \"kubernetes.io/projected/0b48e8f6-4a4f-40da-8304-69ed663dc05b-kube-api-access-rf9f9\") pod \"catalog-operator-68c6474976-tkwxv\" (UID: \"0b48e8f6-4a4f-40da-8304-69ed663dc05b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.807215 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a254a00-2dcd-43cc-8859-fe0e4efed204-service-ca-bundle\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.807608 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.807657 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vpcm5"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.807762 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8eed496a-1d92-43f0-8571-6667803a45d0-audit-dir\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.809315 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8eccda64-70d4-41d9-83bd-5e49545cb4ce-auth-proxy-config\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.809400 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/740209e9-0d7f-4da0-b68f-2ef5628039df-images\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.810452 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.811221 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.809458 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a254a00-2dcd-43cc-8859-fe0e4efed204-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.811602 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-image-import-ca\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.811651 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-node-pullsecrets\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.811996 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-config\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.812083 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-client-ca\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.812599 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-policies\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.812634 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2m7w"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.822169 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n47rp"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.822190 4731 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.812764 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-audit-dir\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.813043 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8eccda64-70d4-41d9-83bd-5e49545cb4ce-config\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.813195 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8eccda64-70d4-41d9-83bd-5e49545cb4ce-machine-approver-tls\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.813594 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qtqcb\" (UID: \"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.813675 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-serving-cert\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.813983 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6ef9a3c-21db-4567-a83c-16f1836b4393-serving-cert\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.814453 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-serving-cert\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.814547 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.814906 4731 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-client-ca\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.814947 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8eed496a-1d92-43f0-8571-6667803a45d0-serving-cert\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.815695 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8eed496a-1d92-43f0-8571-6667803a45d0-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.816070 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0-config\") pod \"openshift-apiserver-operator-796bbdcf4f-8sl64\" (UID: \"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.816404 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-audit\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.816497 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/02f57d03-52ab-4660-bac1-d57f74e53e0e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.817142 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-dir\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.817175 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-config\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.817521 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 
14:56:18.817903 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.818161 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8eed496a-1d92-43f0-8571-6667803a45d0-audit-policies\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.818441 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8eed496a-1d92-43f0-8571-6667803a45d0-etcd-client\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.818653 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a254a00-2dcd-43cc-8859-fe0e4efed204-config\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.819202 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-oauth-serving-cert\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.819230 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-config\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.819561 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-encryption-config\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.819886 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.820042 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.820524 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-trusted-ca-bundle\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.820868 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-service-ca\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.821442 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/740209e9-0d7f-4da0-b68f-2ef5628039df-config\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.821668 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-etcd-serving-ca\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.821768 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a6ef9a3c-21db-4567-a83c-16f1836b4393-trusted-ca\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.812801 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-etcd-client\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.812889 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.823458 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-serving-cert\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.824050 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7966982c-d1bf-409d-a672-6bbaf0d1c954-metrics-tls\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.824088 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.824107 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7966982c-d1bf-409d-a672-6bbaf0d1c954-trusted-ca\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.825137 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.825307 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8eed496a-1d92-43f0-8571-6667803a45d0-encryption-config\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.825456 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71-serving-cert\") pod \"openshift-config-operator-7777fb866f-qtqcb\" (UID: \"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.825838 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.832126 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-bqsdf"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.832525 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3c25ac6-1dcc-4c2e-9983-760215771431-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dhjlm\" (UID: \"e3c25ac6-1dcc-4c2e-9983-760215771431\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.833067 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a254a00-2dcd-43cc-8859-fe0e4efed204-serving-cert\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.833263 4731 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.833295 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-jghzg"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.833424 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/02f57d03-52ab-4660-bac1-d57f74e53e0e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.833730 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.833892 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.834085 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/39a12893-02aa-41c9-94f9-2c3d11cff3bb-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4qb6n\" (UID: \"39a12893-02aa-41c9-94f9-2c3d11cff3bb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.834177 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.834189 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/740209e9-0d7f-4da0-b68f-2ef5628039df-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.834241 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.834205 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bspt9"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.834386 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lscm9"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.834420 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.834432 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bqsdf" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.834529 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.835359 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e5ccef3b-b78c-42b7-aa17-b52800f28cc0-metrics-tls\") pod \"dns-operator-744455d44c-nrh95\" (UID: \"e5ccef3b-b78c-42b7-aa17-b52800f28cc0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.835409 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.836306 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.837735 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c25ac6-1dcc-4c2e-9983-760215771431-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dhjlm\" (UID: \"e3c25ac6-1dcc-4c2e-9983-760215771431\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.838495 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-8sl64\" (UID: \"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.838568 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cmr7l"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.838741 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.838950 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-serving-cert\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.845661 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-oauth-config\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.845796 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.848400 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv"] Nov 25 
14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.851079 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.851433 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.852144 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5mpt"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.854018 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.855236 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbgwg"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.856363 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.857580 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nrh95"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.858908 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fjq2t"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.860098 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bqsdf"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.861717 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.862860 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-grktp"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.864660 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.866424 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d9dwj"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.867913 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858"] Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.869661 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.889849 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909040 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909155 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zvdm\" (UniqueName: \"kubernetes.io/projected/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-kube-api-access-7zvdm\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909375 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909441 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-socket-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909485 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8380637d-e32d-4018-8081-28b3076fad29-secret-volume\") pod \"collect-profiles-29401365-x2qcr\" (UID: \"8380637d-e32d-4018-8081-28b3076fad29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909512 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dc67\" (UniqueName: \"kubernetes.io/projected/013533b2-fe47-4b12-9ecf-98705944a1eb-kube-api-access-8dc67\") pod \"control-plane-machine-set-operator-78cbb6b69f-qpv96\" (UID: \"013533b2-fe47-4b12-9ecf-98705944a1eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909578 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp55p\" (UniqueName: \"kubernetes.io/projected/8380637d-e32d-4018-8081-28b3076fad29-kube-api-access-sp55p\") pod \"collect-profiles-29401365-x2qcr\" (UID: \"8380637d-e32d-4018-8081-28b3076fad29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909633 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80231337-3fb3-499b-a626-48423f329e0b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fjq2t\" (UID: \"80231337-3fb3-499b-a626-48423f329e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909674 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv274\" (UniqueName: \"kubernetes.io/projected/eb067501-cc83-4cdd-ad52-224774a043c4-kube-api-access-mv274\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909738 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-cabundle\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909783 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jcl7\" (UniqueName: \"kubernetes.io/projected/ace20ee2-d2d1-402e-9a3d-1b3f4fb2491c-kube-api-access-7jcl7\") pod \"migrator-59844c95c7-zgzqt\" (UID: \"ace20ee2-d2d1-402e-9a3d-1b3f4fb2491c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909824 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5gxj\" (UniqueName: \"kubernetes.io/projected/a160b766-12c1-4549-b892-794fe9009a9e-kube-api-access-x5gxj\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909844 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/013533b2-fe47-4b12-9ecf-98705944a1eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qpv96\" (UID: \"013533b2-fe47-4b12-9ecf-98705944a1eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909894 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0b48e8f6-4a4f-40da-8304-69ed663dc05b-profile-collector-cert\") pod \"catalog-operator-68c6474976-tkwxv\" (UID: \"0b48e8f6-4a4f-40da-8304-69ed663dc05b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909912 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-key\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.909979 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf9f9\" (UniqueName: \"kubernetes.io/projected/0b48e8f6-4a4f-40da-8304-69ed663dc05b-kube-api-access-rf9f9\") pod \"catalog-operator-68c6474976-tkwxv\" (UID: \"0b48e8f6-4a4f-40da-8304-69ed663dc05b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910034 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910148 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-mountpoint-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: 
\"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910187 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-socket-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910045 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-mountpoint-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910347 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-plugins-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910391 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-images\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910429 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-plugins-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910437 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-csi-data-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910519 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-proxy-tls\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910541 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-csi-data-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910551 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8380637d-e32d-4018-8081-28b3076fad29-config-volume\") pod \"collect-profiles-29401365-x2qcr\" (UID: 
\"8380637d-e32d-4018-8081-28b3076fad29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910587 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-registration-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910610 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltjl4\" (UniqueName: \"kubernetes.io/projected/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-kube-api-access-ltjl4\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910654 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4979\" (UniqueName: \"kubernetes.io/projected/80231337-3fb3-499b-a626-48423f329e0b-kube-api-access-q4979\") pod \"multus-admission-controller-857f4d67dd-fjq2t\" (UID: \"80231337-3fb3-499b-a626-48423f329e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910793 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0b48e8f6-4a4f-40da-8304-69ed663dc05b-srv-cert\") pod \"catalog-operator-68c6474976-tkwxv\" (UID: \"0b48e8f6-4a4f-40da-8304-69ed663dc05b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910721 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a160b766-12c1-4549-b892-794fe9009a9e-registration-dir\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.910901 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e59cf0b-743e-430a-b054-42dc854f20dc-config\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.911063 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.911134 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 
14:56:18.911166 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.911200 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e59cf0b-743e-430a-b054-42dc854f20dc-serving-cert\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.911238 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.911325 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvj7m\" (UniqueName: \"kubernetes.io/projected/2e59cf0b-743e-430a-b054-42dc854f20dc-kube-api-access-vvj7m\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.912368 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.919776 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/82912045-fbea-4058-b4a4-e462a37d85b7-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-jxnpg\" (UID: \"82912045-fbea-4058-b4a4-e462a37d85b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.929780 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.949013 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.956309 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/82912045-fbea-4058-b4a4-e462a37d85b7-config\") pod \"kube-apiserver-operator-766d6c64bb-jxnpg\" (UID: \"82912045-fbea-4058-b4a4-e462a37d85b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:18 crc kubenswrapper[4731]: I1125 14:56:18.990323 4731 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.009257 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.020056 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ddc212cb-2fed-4211-be15-5bd5c25d479a-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tbhxk\" (UID: \"ddc212cb-2fed-4211-be15-5bd5c25d479a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.029546 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.050251 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.056630 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddc212cb-2fed-4211-be15-5bd5c25d479a-config\") pod \"kube-controller-manager-operator-78b949d7b-tbhxk\" (UID: \"ddc212cb-2fed-4211-be15-5bd5c25d479a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.069037 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.090284 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.110302 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.123995 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/80231337-3fb3-499b-a626-48423f329e0b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-fjq2t\" (UID: \"80231337-3fb3-499b-a626-48423f329e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.130202 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.149762 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.169564 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.189677 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.210060 4731 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.229691 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.250556 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.270604 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.289945 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.309731 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.330044 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.350434 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.370619 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.390355 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.410418 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.430840 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.450807 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.480261 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.490337 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.509886 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.529360 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.549776 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.570591 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.590142 4731 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.610114 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.630477 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.649471 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.668954 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.689879 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.691544 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-images\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.710299 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.711622 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8380637d-e32d-4018-8081-28b3076fad29-config-volume\") pod \"collect-profiles-29401365-x2qcr\" (UID: \"8380637d-e32d-4018-8081-28b3076fad29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.730384 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.747717 4731 request.go:700] Waited for 1.018865748s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dcollect-profiles-dockercfg-kzf4t&limit=500&resourceVersion=0 Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.750180 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.771487 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.786479 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/0b48e8f6-4a4f-40da-8304-69ed663dc05b-profile-collector-cert\") pod \"catalog-operator-68c6474976-tkwxv\" (UID: \"0b48e8f6-4a4f-40da-8304-69ed663dc05b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.788380 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/8380637d-e32d-4018-8081-28b3076fad29-secret-volume\") pod \"collect-profiles-29401365-x2qcr\" (UID: \"8380637d-e32d-4018-8081-28b3076fad29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.790085 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.810443 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.810777 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.830218 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.850329 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.854238 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-proxy-tls\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.869270 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.889401 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.894575 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/013533b2-fe47-4b12-9ecf-98705944a1eb-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-qpv96\" (UID: \"013533b2-fe47-4b12-9ecf-98705944a1eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.910034 4731 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.910733 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-config podName:c84e85c2-1b8f-4e13-9b6c-9824981d26e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:20.4106935 +0000 UTC m=+139.157827950 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-config") pod "openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" (UID: "c84e85c2-1b8f-4e13-9b6c-9824981d26e8") : failed to sync configmap cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.911349 4731 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.910328 4731 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.912504 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-trusted-ca podName:eb067501-cc83-4cdd-ad52-224774a043c4 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:20.412469456 +0000 UTC m=+139.159603886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-trusted-ca") pod "marketplace-operator-79b997595-t5mpt" (UID: "eb067501-cc83-4cdd-ad52-224774a043c4") : failed to sync configmap cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.912559 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-cabundle podName:9e2c002e-79b6-44a2-9789-fa2da26f3cd5 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:20.412524397 +0000 UTC m=+139.159658827 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-cabundle") pod "service-ca-9c57cc56f-d9dwj" (UID: "9e2c002e-79b6-44a2-9789-fa2da26f3cd5") : failed to sync configmap cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.910605 4731 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.912597 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-key podName:9e2c002e-79b6-44a2-9789-fa2da26f3cd5 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:20.412589789 +0000 UTC m=+139.159724219 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-key") pod "service-ca-9c57cc56f-d9dwj" (UID: "9e2c002e-79b6-44a2-9789-fa2da26f3cd5") : failed to sync secret cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.911083 4731 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.912638 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0b48e8f6-4a4f-40da-8304-69ed663dc05b-srv-cert podName:0b48e8f6-4a4f-40da-8304-69ed663dc05b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:56:20.41263307 +0000 UTC m=+139.159767500 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/0b48e8f6-4a4f-40da-8304-69ed663dc05b-srv-cert") pod "catalog-operator-68c6474976-tkwxv" (UID: "0b48e8f6-4a4f-40da-8304-69ed663dc05b") : failed to sync secret cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.911120 4731 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.912670 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2e59cf0b-743e-430a-b054-42dc854f20dc-config podName:2e59cf0b-743e-430a-b054-42dc854f20dc nodeName:}" failed. No retries permitted until 2025-11-25 14:56:20.412665671 +0000 UTC m=+139.159800101 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2e59cf0b-743e-430a-b054-42dc854f20dc-config") pod "service-ca-operator-777779d784-grktp" (UID: "2e59cf0b-743e-430a-b054-42dc854f20dc") : failed to sync configmap cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.911375 4731 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.912747 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2e59cf0b-743e-430a-b054-42dc854f20dc-serving-cert podName:2e59cf0b-743e-430a-b054-42dc854f20dc nodeName:}" failed. No retries permitted until 2025-11-25 14:56:20.412723033 +0000 UTC m=+139.159857463 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2e59cf0b-743e-430a-b054-42dc854f20dc-serving-cert") pod "service-ca-operator-777779d784-grktp" (UID: "2e59cf0b-743e-430a-b054-42dc854f20dc") : failed to sync secret cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.911416 4731 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.912804 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-operator-metrics podName:eb067501-cc83-4cdd-ad52-224774a043c4 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:20.412780804 +0000 UTC m=+139.159915234 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-operator-metrics") pod "marketplace-operator-79b997595-t5mpt" (UID: "eb067501-cc83-4cdd-ad52-224774a043c4") : failed to sync secret cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.911541 4731 secret.go:188] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: E1125 14:56:19.912888 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-serving-cert podName:c84e85c2-1b8f-4e13-9b6c-9824981d26e8 nodeName:}" failed. No retries permitted until 2025-11-25 14:56:20.412877527 +0000 UTC m=+139.160011957 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-serving-cert") pod "openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" (UID: "c84e85c2-1b8f-4e13-9b6c-9824981d26e8") : failed to sync secret cache: timed out waiting for the condition Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.911556 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.929014 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.949496 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.970288 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Nov 25 14:56:19 crc kubenswrapper[4731]: I1125 14:56:19.997489 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.010563 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.031239 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.049466 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.069677 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.089767 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.110162 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.129967 4731 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.149428 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.168699 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.189736 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.209758 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.229047 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.250810 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.269602 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.289661 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.310105 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.330049 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.349646 4731 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.370085 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.389642 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.429710 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.434879 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e59cf0b-743e-430a-b054-42dc854f20dc-serving-cert\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.434948 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.435051 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-cabundle\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.435101 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-key\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.435203 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0b48e8f6-4a4f-40da-8304-69ed663dc05b-srv-cert\") pod \"catalog-operator-68c6474976-tkwxv\" (UID: \"0b48e8f6-4a4f-40da-8304-69ed663dc05b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.435235 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e59cf0b-743e-430a-b054-42dc854f20dc-config\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.435252 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.435272 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.435292 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.436420 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.438031 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.438210 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-cabundle\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.438695 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-signing-key\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.438722 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.439394 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e59cf0b-743e-430a-b054-42dc854f20dc-serving-cert\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.439466 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.440463 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/0b48e8f6-4a4f-40da-8304-69ed663dc05b-srv-cert\") pod \"catalog-operator-68c6474976-tkwxv\" (UID: \"0b48e8f6-4a4f-40da-8304-69ed663dc05b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.450660 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.460231 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e59cf0b-743e-430a-b054-42dc854f20dc-config\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.470494 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.505323 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n7gnj\" (UniqueName: \"kubernetes.io/projected/a0daa5b0-809c-4237-bd24-b1eabc5a4a0e-kube-api-access-n7gnj\") pod \"apiserver-76f77b778f-n47rp\" (UID: \"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e\") " pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.518475 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.525239 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkx8r\" (UniqueName: \"kubernetes.io/projected/02f57d03-52ab-4660-bac1-d57f74e53e0e-kube-api-access-vkx8r\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.545058 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7966982c-d1bf-409d-a672-6bbaf0d1c954-bound-sa-token\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.563458 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-664g5\" (UniqueName: \"kubernetes.io/projected/eea531d8-c1b6-4b38-908e-e385c965d06e-kube-api-access-664g5\") pod \"oauth-openshift-558db77b4-fp5cw\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.580475 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.583225 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfplq\" (UniqueName: \"kubernetes.io/projected/5a457d06-8bf4-4077-a3e9-d25ff971832b-kube-api-access-jfplq\") pod \"console-f9d7485db-dhtk5\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.604041 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/82912045-fbea-4058-b4a4-e462a37d85b7-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-jxnpg\" (UID: \"82912045-fbea-4058-b4a4-e462a37d85b7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.625394 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ddc212cb-2fed-4211-be15-5bd5c25d479a-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tbhxk\" (UID: \"ddc212cb-2fed-4211-be15-5bd5c25d479a\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.647046 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vg7b\" (UniqueName: \"kubernetes.io/projected/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-kube-api-access-6vg7b\") pod \"route-controller-manager-6576b87f9c-p4sns\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.668455 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzhwf\" (UniqueName: \"kubernetes.io/projected/8eccda64-70d4-41d9-83bd-5e49545cb4ce-kube-api-access-zzhwf\") pod \"machine-approver-56656f9798-29qbz\" (UID: \"8eccda64-70d4-41d9-83bd-5e49545cb4ce\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.690457 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/02f57d03-52ab-4660-bac1-d57f74e53e0e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mfgp7\" (UID: \"02f57d03-52ab-4660-bac1-d57f74e53e0e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.704713 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.707409 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvz9h\" (UniqueName: \"kubernetes.io/projected/a6ef9a3c-21db-4567-a83c-16f1836b4393-kube-api-access-pvz9h\") pod \"console-operator-58897d9998-gf6k5\" (UID: \"a6ef9a3c-21db-4567-a83c-16f1836b4393\") " pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.711623 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.728421 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4xm\" (UniqueName: \"kubernetes.io/projected/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-kube-api-access-9w4xm\") pod \"controller-manager-879f6c89f-c2m7w\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.742739 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.748226 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnp7x\" (UniqueName: \"kubernetes.io/projected/e3c25ac6-1dcc-4c2e-9983-760215771431-kube-api-access-rnp7x\") pod \"openshift-controller-manager-operator-756b6f6bc6-dhjlm\" (UID: \"e3c25ac6-1dcc-4c2e-9983-760215771431\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.762560 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.767450 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmxz7\" (UniqueName: \"kubernetes.io/projected/39a12893-02aa-41c9-94f9-2c3d11cff3bb-kube-api-access-nmxz7\") pod \"cluster-samples-operator-665b6dd947-4qb6n\" (UID: \"39a12893-02aa-41c9-94f9-2c3d11cff3bb\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.767564 4731 request.go:700] Waited for 1.952856769s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.785281 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6p7lk\" (UniqueName: \"kubernetes.io/projected/5a254a00-2dcd-43cc-8859-fe0e4efed204-kube-api-access-6p7lk\") pod \"authentication-operator-69f744f599-mvw8v\" (UID: \"5a254a00-2dcd-43cc-8859-fe0e4efed204\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.800795 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.808094 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lt5f\" (UniqueName: \"kubernetes.io/projected/7966982c-d1bf-409d-a672-6bbaf0d1c954-kube-api-access-5lt5f\") pod \"ingress-operator-5b745b69d9-4rz42\" (UID: \"7966982c-d1bf-409d-a672-6bbaf0d1c954\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.824442 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fp5cw"] Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.846023 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-n47rp"] Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.877048 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7gzv\" (UniqueName: \"kubernetes.io/projected/8eed496a-1d92-43f0-8571-6667803a45d0-kube-api-access-g7gzv\") pod \"apiserver-7bbb656c7d-8q4qs\" (UID: \"8eed496a-1d92-43f0-8571-6667803a45d0\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.877409 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.877978 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.879430 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9vtx\" (UniqueName: \"kubernetes.io/projected/b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0-kube-api-access-j9vtx\") pod \"openshift-apiserver-operator-796bbdcf4f-8sl64\" (UID: \"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.894052 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgr55\" (UniqueName: \"kubernetes.io/projected/fc54c4d0-18b0-4249-9e18-d8a31c0135ae-kube-api-access-qgr55\") pod \"downloads-7954f5f757-nzzxp\" (UID: \"fc54c4d0-18b0-4249-9e18-d8a31c0135ae\") " pod="openshift-console/downloads-7954f5f757-nzzxp" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.900107 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.906184 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.916300 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whphm\" (UniqueName: \"kubernetes.io/projected/89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71-kube-api-access-whphm\") pod \"openshift-config-operator-7777fb866f-qtqcb\" (UID: \"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.927597 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75nfb\" (UniqueName: \"kubernetes.io/projected/e5ccef3b-b78c-42b7-aa17-b52800f28cc0-kube-api-access-75nfb\") pod \"dns-operator-744455d44c-nrh95\" (UID: \"e5ccef3b-b78c-42b7-aa17-b52800f28cc0\") " pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.932299 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.945319 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xkdz\" (UniqueName: \"kubernetes.io/projected/740209e9-0d7f-4da0-b68f-2ef5628039df-kube-api-access-9xkdz\") pod \"machine-api-operator-5694c8668f-vpcm5\" (UID: \"740209e9-0d7f-4da0-b68f-2ef5628039df\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:20 crc kubenswrapper[4731]: W1125 14:56:20.950543 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeea531d8_c1b6_4b38_908e_e385c965d06e.slice/crio-18468b64520c921ea2a2154532a287ba720f0059429cee4e546aa925bf1cdceb WatchSource:0}: Error finding container 18468b64520c921ea2a2154532a287ba720f0059429cee4e546aa925bf1cdceb: Status 404 returned error can't find the container with id 18468b64520c921ea2a2154532a287ba720f0059429cee4e546aa925bf1cdceb Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.950703 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.968150 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.971040 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.972958 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.980183 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.990427 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.990863 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.996876 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" Nov 25 14:56:20 crc kubenswrapper[4731]: I1125 14:56:20.999104 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.009437 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 14:56:21 crc kubenswrapper[4731]: W1125 14:56:21.010809 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podddc212cb_2fed_4211_be15_5bd5c25d479a.slice/crio-1674c8f6a209a065e184db5b0b76f57de87c447e04524937927e632657cbb5d6 WatchSource:0}: Error finding container 1674c8f6a209a065e184db5b0b76f57de87c447e04524937927e632657cbb5d6: Status 404 returned error can't find the container with id 1674c8f6a209a065e184db5b0b76f57de87c447e04524937927e632657cbb5d6 Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.027963 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.029544 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.034791 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.050389 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.069303 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 14:56:21 crc kubenswrapper[4731]: W1125 14:56:21.091175 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82912045_fbea_4058_b4a4_e462a37d85b7.slice/crio-6a4fa0cf24b95438b8c4e5e95bc2ca5a6abc57201922f29012a750c6f7142ace WatchSource:0}: Error finding container 6a4fa0cf24b95438b8c4e5e95bc2ca5a6abc57201922f29012a750c6f7142ace: Status 404 returned error can't find the container with id 6a4fa0cf24b95438b8c4e5e95bc2ca5a6abc57201922f29012a750c6f7142ace Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.091561 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.091936 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-nzzxp" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.103030 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2m7w"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.115256 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.128956 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.141252 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.146015 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.156148 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.180525 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c84e85c2-1b8f-4e13-9b6c-9824981d26e8-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-l8zwj\" (UID: \"c84e85c2-1b8f-4e13-9b6c-9824981d26e8\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.185175 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zvdm\" (UniqueName: \"kubernetes.io/projected/9e2c002e-79b6-44a2-9789-fa2da26f3cd5-kube-api-access-7zvdm\") pod \"service-ca-9c57cc56f-d9dwj\" (UID: \"9e2c002e-79b6-44a2-9789-fa2da26f3cd5\") " pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.192929 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dhtk5"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.216985 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dc67\" (UniqueName: \"kubernetes.io/projected/013533b2-fe47-4b12-9ecf-98705944a1eb-kube-api-access-8dc67\") pod \"control-plane-machine-set-operator-78cbb6b69f-qpv96\" (UID: \"013533b2-fe47-4b12-9ecf-98705944a1eb\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.229473 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp55p\" (UniqueName: \"kubernetes.io/projected/8380637d-e32d-4018-8081-28b3076fad29-kube-api-access-sp55p\") pod \"collect-profiles-29401365-x2qcr\" (UID: \"8380637d-e32d-4018-8081-28b3076fad29\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:21 crc kubenswrapper[4731]: W1125 14:56:21.245312 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7852fce_1e0b_46ca_9c1d_9f4fe3af3fca.slice/crio-f055e7a7cf33dc99fd9e67b8bb0d189268e08aa73f09b42b11704d574f269f90 WatchSource:0}: Error finding container f055e7a7cf33dc99fd9e67b8bb0d189268e08aa73f09b42b11704d574f269f90: 
Status 404 returned error can't find the container with id f055e7a7cf33dc99fd9e67b8bb0d189268e08aa73f09b42b11704d574f269f90 Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.246068 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-mvw8v"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.250728 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv274\" (UniqueName: \"kubernetes.io/projected/eb067501-cc83-4cdd-ad52-224774a043c4-kube-api-access-mv274\") pod \"marketplace-operator-79b997595-t5mpt\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.284376 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jcl7\" (UniqueName: \"kubernetes.io/projected/ace20ee2-d2d1-402e-9a3d-1b3f4fb2491c-kube-api-access-7jcl7\") pod \"migrator-59844c95c7-zgzqt\" (UID: \"ace20ee2-d2d1-402e-9a3d-1b3f4fb2491c\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.289083 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5gxj\" (UniqueName: \"kubernetes.io/projected/a160b766-12c1-4549-b892-794fe9009a9e-kube-api-access-x5gxj\") pod \"csi-hostpathplugin-cmr7l\" (UID: \"a160b766-12c1-4549-b892-794fe9009a9e\") " pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:21 crc kubenswrapper[4731]: W1125 14:56:21.309798 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a457d06_8bf4_4077_a3e9_d25ff971832b.slice/crio-0cbe87e914e570842cf4755d02b11f7a770ad6fe1f4973eb0adf005821b3f2be WatchSource:0}: Error finding container 0cbe87e914e570842cf4755d02b11f7a770ad6fe1f4973eb0adf005821b3f2be: Status 404 returned error can't find the container with id 0cbe87e914e570842cf4755d02b11f7a770ad6fe1f4973eb0adf005821b3f2be Nov 25 14:56:21 crc kubenswrapper[4731]: W1125 14:56:21.313934 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a254a00_2dcd_43cc_8859_fe0e4efed204.slice/crio-a331220588dcd80dc991b6d34d64fe41040457477f068ec9a5c1d34ea22e8e1e WatchSource:0}: Error finding container a331220588dcd80dc991b6d34d64fe41040457477f068ec9a5c1d34ea22e8e1e: Status 404 returned error can't find the container with id a331220588dcd80dc991b6d34d64fe41040457477f068ec9a5c1d34ea22e8e1e Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.318326 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf9f9\" (UniqueName: \"kubernetes.io/projected/0b48e8f6-4a4f-40da-8304-69ed663dc05b-kube-api-access-rf9f9\") pod \"catalog-operator-68c6474976-tkwxv\" (UID: \"0b48e8f6-4a4f-40da-8304-69ed663dc05b\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.318600 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.330046 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltjl4\" (UniqueName: \"kubernetes.io/projected/3bfcce86-59b2-4bfb-b8da-9a7eb76e766c-kube-api-access-ltjl4\") pod \"machine-config-operator-74547568cd-5jpdg\" (UID: \"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.355465 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4979\" (UniqueName: \"kubernetes.io/projected/80231337-3fb3-499b-a626-48423f329e0b-kube-api-access-q4979\") pod \"multus-admission-controller-857f4d67dd-fjq2t\" (UID: \"80231337-3fb3-499b-a626-48423f329e0b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.368589 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvj7m\" (UniqueName: \"kubernetes.io/projected/2e59cf0b-743e-430a-b054-42dc854f20dc-kube-api-access-vvj7m\") pod \"service-ca-operator-777779d784-grktp\" (UID: \"2e59cf0b-743e-430a-b054-42dc854f20dc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.387617 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.389939 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.395869 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.411696 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.416025 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.424102 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.435126 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.435705 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.449345 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.459737 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.463184 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467078 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5k2c\" (UniqueName: \"kubernetes.io/projected/513cf31e-cfbc-4978-aa3a-b99736bdf9ca-kube-api-access-b5k2c\") pod \"package-server-manager-789f6589d5-5ldqz\" (UID: \"513cf31e-cfbc-4978-aa3a-b99736bdf9ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467170 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-certificates\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467202 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15711751-e25f-4d8e-b9d7-1882b5bedd4f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-twjxg\" (UID: \"15711751-e25f-4d8e-b9d7-1882b5bedd4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467228 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50f274c0-be0d-4c02-a83e-b689a572a78e-webhook-cert\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467297 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1201d935-c98b-429a-b1ef-dd2cf2e3fd5d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qmks6\" (UID: \"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467348 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-tls\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467394 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms9jm\" (UniqueName: \"kubernetes.io/projected/d81b8c12-b875-47a7-b2ee-c8cac5539c87-kube-api-access-ms9jm\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467454 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d81b8c12-b875-47a7-b2ee-c8cac5539c87-service-ca-bundle\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467482 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dabdd005-1a40-4e3f-80e7-48afcac46987-proxy-tls\") pod \"machine-config-controller-84d6567774-26nlv\" (UID: \"dabdd005-1a40-4e3f-80e7-48afcac46987\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467564 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-594pd\" (UniqueName: \"kubernetes.io/projected/0e9596cb-d3e0-4103-9426-44e57ba1593d-kube-api-access-594pd\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467627 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50f274c0-be0d-4c02-a83e-b689a572a78e-apiservice-cert\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467695 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-trusted-ca\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467722 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dabdd005-1a40-4e3f-80e7-48afcac46987-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-26nlv\" (UID: \"dabdd005-1a40-4e3f-80e7-48afcac46987\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467802 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1201d935-c98b-429a-b1ef-dd2cf2e3fd5d-srv-cert\") pod \"olm-operator-6b444d44fb-qmks6\" (UID: \"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467854 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467891 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqbwg\" (UniqueName: \"kubernetes.io/projected/1201d935-c98b-429a-b1ef-dd2cf2e3fd5d-kube-api-access-fqbwg\") pod \"olm-operator-6b444d44fb-qmks6\" (UID: \"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467928 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-bound-sa-token\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.467981 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e9596cb-d3e0-4103-9426-44e57ba1593d-etcd-service-ca\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468006 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9596cb-d3e0-4103-9426-44e57ba1593d-config\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468030 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e9596cb-d3e0-4103-9426-44e57ba1593d-etcd-client\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468080 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468104 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d81b8c12-b875-47a7-b2ee-c8cac5539c87-default-certificate\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468150 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b8c12-b875-47a7-b2ee-c8cac5539c87-metrics-certs\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468195 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/50f274c0-be0d-4c02-a83e-b689a572a78e-tmpfs\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468264 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d81b8c12-b875-47a7-b2ee-c8cac5539c87-stats-auth\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468293 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468362 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e9596cb-d3e0-4103-9426-44e57ba1593d-serving-cert\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468416 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48vqj\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-kube-api-access-48vqj\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468443 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15711751-e25f-4d8e-b9d7-1882b5bedd4f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-twjxg\" (UID: \"15711751-e25f-4d8e-b9d7-1882b5bedd4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468489 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffxw8\" (UniqueName: \"kubernetes.io/projected/50f274c0-be0d-4c02-a83e-b689a572a78e-kube-api-access-ffxw8\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468512 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc5kp\" (UniqueName: \"kubernetes.io/projected/dabdd005-1a40-4e3f-80e7-48afcac46987-kube-api-access-tc5kp\") pod \"machine-config-controller-84d6567774-26nlv\" (UID: \"dabdd005-1a40-4e3f-80e7-48afcac46987\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468541 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/0e9596cb-d3e0-4103-9426-44e57ba1593d-etcd-ca\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468586 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92wtz\" (UniqueName: \"kubernetes.io/projected/15711751-e25f-4d8e-b9d7-1882b5bedd4f-kube-api-access-92wtz\") pod \"kube-storage-version-migrator-operator-b67b599dd-twjxg\" (UID: \"15711751-e25f-4d8e-b9d7-1882b5bedd4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.468645 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/513cf31e-cfbc-4978-aa3a-b99736bdf9ca-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5ldqz\" (UID: \"513cf31e-cfbc-4978-aa3a-b99736bdf9ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" Nov 25 14:56:21 crc kubenswrapper[4731]: E1125 14:56:21.472114 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:21.972083796 +0000 UTC m=+140.719218226 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.476467 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.479309 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gf6k5"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.493686 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.569931 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.570203 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d81b8c12-b875-47a7-b2ee-c8cac5539c87-stats-auth\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.570252 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.570302 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e9596cb-d3e0-4103-9426-44e57ba1593d-serving-cert\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.570349 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48vqj\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-kube-api-access-48vqj\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.570379 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15711751-e25f-4d8e-b9d7-1882b5bedd4f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-twjxg\" (UID: \"15711751-e25f-4d8e-b9d7-1882b5bedd4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.570455 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffxw8\" (UniqueName: \"kubernetes.io/projected/50f274c0-be0d-4c02-a83e-b689a572a78e-kube-api-access-ffxw8\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.570474 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc5kp\" (UniqueName: \"kubernetes.io/projected/dabdd005-1a40-4e3f-80e7-48afcac46987-kube-api-access-tc5kp\") pod \"machine-config-controller-84d6567774-26nlv\" (UID: \"dabdd005-1a40-4e3f-80e7-48afcac46987\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.570499 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0e9596cb-d3e0-4103-9426-44e57ba1593d-etcd-ca\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: E1125 14:56:21.570708 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:22.070673578 +0000 UTC m=+140.817808008 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.571906 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92wtz\" (UniqueName: \"kubernetes.io/projected/15711751-e25f-4d8e-b9d7-1882b5bedd4f-kube-api-access-92wtz\") pod \"kube-storage-version-migrator-operator-b67b599dd-twjxg\" (UID: \"15711751-e25f-4d8e-b9d7-1882b5bedd4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572105 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/513cf31e-cfbc-4978-aa3a-b99736bdf9ca-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5ldqz\" (UID: \"513cf31e-cfbc-4978-aa3a-b99736bdf9ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572180 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/913937a0-8c0d-451d-b313-ed104a651929-metrics-tls\") pod \"dns-default-bspt9\" (UID: \"913937a0-8c0d-451d-b313-ed104a651929\") " pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572237 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn8cp\" (UniqueName: \"kubernetes.io/projected/557361ba-d1ec-4592-b8be-33cfba5f8d78-kube-api-access-wn8cp\") pod \"ingress-canary-bqsdf\" (UID: \"557361ba-d1ec-4592-b8be-33cfba5f8d78\") " pod="openshift-ingress-canary/ingress-canary-bqsdf" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572298 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5k2c\" (UniqueName: \"kubernetes.io/projected/513cf31e-cfbc-4978-aa3a-b99736bdf9ca-kube-api-access-b5k2c\") pod \"package-server-manager-789f6589d5-5ldqz\" (UID: \"513cf31e-cfbc-4978-aa3a-b99736bdf9ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572326 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/913937a0-8c0d-451d-b313-ed104a651929-config-volume\") pod \"dns-default-bspt9\" (UID: \"913937a0-8c0d-451d-b313-ed104a651929\") " pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572406 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-certificates\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572428 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15711751-e25f-4d8e-b9d7-1882b5bedd4f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-twjxg\" (UID: \"15711751-e25f-4d8e-b9d7-1882b5bedd4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572452 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50f274c0-be0d-4c02-a83e-b689a572a78e-webhook-cert\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572583 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1201d935-c98b-429a-b1ef-dd2cf2e3fd5d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qmks6\" (UID: \"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572604 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/22bfd188-d7a7-44f7-8d9b-c69886faf3a7-certs\") pod \"machine-config-server-jghzg\" (UID: \"22bfd188-d7a7-44f7-8d9b-c69886faf3a7\") " pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.572703 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-tls\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.575162 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-certificates\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.575307 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms9jm\" (UniqueName: \"kubernetes.io/projected/d81b8c12-b875-47a7-b2ee-c8cac5539c87-kube-api-access-ms9jm\") pod \"router-default-5444994796-sclpc\" (UID: 
\"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.575382 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d81b8c12-b875-47a7-b2ee-c8cac5539c87-service-ca-bundle\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.575407 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/22bfd188-d7a7-44f7-8d9b-c69886faf3a7-node-bootstrap-token\") pod \"machine-config-server-jghzg\" (UID: \"22bfd188-d7a7-44f7-8d9b-c69886faf3a7\") " pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.575453 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dabdd005-1a40-4e3f-80e7-48afcac46987-proxy-tls\") pod \"machine-config-controller-84d6567774-26nlv\" (UID: \"dabdd005-1a40-4e3f-80e7-48afcac46987\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.575472 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557361ba-d1ec-4592-b8be-33cfba5f8d78-cert\") pod \"ingress-canary-bqsdf\" (UID: \"557361ba-d1ec-4592-b8be-33cfba5f8d78\") " pod="openshift-ingress-canary/ingress-canary-bqsdf" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.575513 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-594pd\" (UniqueName: \"kubernetes.io/projected/0e9596cb-d3e0-4103-9426-44e57ba1593d-kube-api-access-594pd\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576284 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50f274c0-be0d-4c02-a83e-b689a572a78e-apiservice-cert\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576324 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-trusted-ca\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576355 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dabdd005-1a40-4e3f-80e7-48afcac46987-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-26nlv\" (UID: \"dabdd005-1a40-4e3f-80e7-48afcac46987\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576524 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1201d935-c98b-429a-b1ef-dd2cf2e3fd5d-srv-cert\") pod \"olm-operator-6b444d44fb-qmks6\" (UID: \"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576565 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc2sg\" (UniqueName: \"kubernetes.io/projected/913937a0-8c0d-451d-b313-ed104a651929-kube-api-access-wc2sg\") pod \"dns-default-bspt9\" (UID: \"913937a0-8c0d-451d-b313-ed104a651929\") " pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576612 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576636 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqbwg\" (UniqueName: \"kubernetes.io/projected/1201d935-c98b-429a-b1ef-dd2cf2e3fd5d-kube-api-access-fqbwg\") pod \"olm-operator-6b444d44fb-qmks6\" (UID: \"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576678 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-bound-sa-token\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576741 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9596cb-d3e0-4103-9426-44e57ba1593d-config\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576761 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e9596cb-d3e0-4103-9426-44e57ba1593d-etcd-service-ca\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576803 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e9596cb-d3e0-4103-9426-44e57ba1593d-etcd-client\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.576849 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66k6h\" (UniqueName: \"kubernetes.io/projected/22bfd188-d7a7-44f7-8d9b-c69886faf3a7-kube-api-access-66k6h\") pod \"machine-config-server-jghzg\" (UID: 
\"22bfd188-d7a7-44f7-8d9b-c69886faf3a7\") " pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.578267 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.578302 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d81b8c12-b875-47a7-b2ee-c8cac5539c87-default-certificate\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.578341 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b8c12-b875-47a7-b2ee-c8cac5539c87-metrics-certs\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.578361 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/50f274c0-be0d-4c02-a83e-b689a572a78e-tmpfs\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.584652 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dabdd005-1a40-4e3f-80e7-48afcac46987-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-26nlv\" (UID: \"dabdd005-1a40-4e3f-80e7-48afcac46987\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.587044 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d81b8c12-b875-47a7-b2ee-c8cac5539c87-stats-auth\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.587075 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.587043 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/0e9596cb-d3e0-4103-9426-44e57ba1593d-etcd-ca\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.587269 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/15711751-e25f-4d8e-b9d7-1882b5bedd4f-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-twjxg\" (UID: \"15711751-e25f-4d8e-b9d7-1882b5bedd4f\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.588132 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e9596cb-d3e0-4103-9426-44e57ba1593d-etcd-service-ca\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: E1125 14:56:21.588501 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:22.088474941 +0000 UTC m=+140.835609361 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.588596 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nrh95"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.590012 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/15711751-e25f-4d8e-b9d7-1882b5bedd4f-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-twjxg\" (UID: \"15711751-e25f-4d8e-b9d7-1882b5bedd4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.590524 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/50f274c0-be0d-4c02-a83e-b689a572a78e-tmpfs\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.590888 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e9596cb-d3e0-4103-9426-44e57ba1593d-config\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.590921 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-ca-trust-extracted\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.590943 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d81b8c12-b875-47a7-b2ee-c8cac5539c87-service-ca-bundle\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 
14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.594148 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/50f274c0-be0d-4c02-a83e-b689a572a78e-webhook-cert\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.595096 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-installation-pull-secrets\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.597142 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/1201d935-c98b-429a-b1ef-dd2cf2e3fd5d-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qmks6\" (UID: \"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.609200 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-trusted-ca\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.614919 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/1201d935-c98b-429a-b1ef-dd2cf2e3fd5d-srv-cert\") pod \"olm-operator-6b444d44fb-qmks6\" (UID: \"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.615679 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dabdd005-1a40-4e3f-80e7-48afcac46987-proxy-tls\") pod \"machine-config-controller-84d6567774-26nlv\" (UID: \"dabdd005-1a40-4e3f-80e7-48afcac46987\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.615717 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/50f274c0-be0d-4c02-a83e-b689a572a78e-apiservice-cert\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.622759 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d81b8c12-b875-47a7-b2ee-c8cac5539c87-metrics-certs\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.635151 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e9596cb-d3e0-4103-9426-44e57ba1593d-serving-cert\") pod \"etcd-operator-b45778765-lscm9\" (UID: 
\"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.635326 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0e9596cb-d3e0-4103-9426-44e57ba1593d-etcd-client\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.635346 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d81b8c12-b875-47a7-b2ee-c8cac5539c87-default-certificate\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.635552 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-tls\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.638591 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92wtz\" (UniqueName: \"kubernetes.io/projected/15711751-e25f-4d8e-b9d7-1882b5bedd4f-kube-api-access-92wtz\") pod \"kube-storage-version-migrator-operator-b67b599dd-twjxg\" (UID: \"15711751-e25f-4d8e-b9d7-1882b5bedd4f\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.638848 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/513cf31e-cfbc-4978-aa3a-b99736bdf9ca-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5ldqz\" (UID: \"513cf31e-cfbc-4978-aa3a-b99736bdf9ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.643459 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.654182 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48vqj\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-kube-api-access-48vqj\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.664788 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" event={"ID":"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5","Type":"ContainerStarted","Data":"cf107b996c9122cecbafa2c6761201f01abcb2c5977968fc1e3d2aac66d212bc"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.671286 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.677498 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" event={"ID":"ddc212cb-2fed-4211-be15-5bd5c25d479a","Type":"ContainerStarted","Data":"1674c8f6a209a065e184db5b0b76f57de87c447e04524937927e632657cbb5d6"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.679166 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.679449 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/913937a0-8c0d-451d-b313-ed104a651929-config-volume\") pod \"dns-default-bspt9\" (UID: \"913937a0-8c0d-451d-b313-ed104a651929\") " pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.679484 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/22bfd188-d7a7-44f7-8d9b-c69886faf3a7-certs\") pod \"machine-config-server-jghzg\" (UID: \"22bfd188-d7a7-44f7-8d9b-c69886faf3a7\") " pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.679521 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/22bfd188-d7a7-44f7-8d9b-c69886faf3a7-node-bootstrap-token\") pod \"machine-config-server-jghzg\" (UID: \"22bfd188-d7a7-44f7-8d9b-c69886faf3a7\") " pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.679537 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557361ba-d1ec-4592-b8be-33cfba5f8d78-cert\") pod \"ingress-canary-bqsdf\" (UID: \"557361ba-d1ec-4592-b8be-33cfba5f8d78\") " pod="openshift-ingress-canary/ingress-canary-bqsdf" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.679684 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc2sg\" (UniqueName: \"kubernetes.io/projected/913937a0-8c0d-451d-b313-ed104a651929-kube-api-access-wc2sg\") pod \"dns-default-bspt9\" (UID: \"913937a0-8c0d-451d-b313-ed104a651929\") " pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.679747 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66k6h\" (UniqueName: \"kubernetes.io/projected/22bfd188-d7a7-44f7-8d9b-c69886faf3a7-kube-api-access-66k6h\") pod \"machine-config-server-jghzg\" (UID: \"22bfd188-d7a7-44f7-8d9b-c69886faf3a7\") " pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.679809 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/913937a0-8c0d-451d-b313-ed104a651929-metrics-tls\") pod \"dns-default-bspt9\" (UID: 
\"913937a0-8c0d-451d-b313-ed104a651929\") " pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.679866 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wn8cp\" (UniqueName: \"kubernetes.io/projected/557361ba-d1ec-4592-b8be-33cfba5f8d78-kube-api-access-wn8cp\") pod \"ingress-canary-bqsdf\" (UID: \"557361ba-d1ec-4592-b8be-33cfba5f8d78\") " pod="openshift-ingress-canary/ingress-canary-bqsdf" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.681680 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dhtk5" event={"ID":"5a457d06-8bf4-4077-a3e9-d25ff971832b","Type":"ContainerStarted","Data":"0cbe87e914e570842cf4755d02b11f7a770ad6fe1f4973eb0adf005821b3f2be"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.682368 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-594pd\" (UniqueName: \"kubernetes.io/projected/0e9596cb-d3e0-4103-9426-44e57ba1593d-kube-api-access-594pd\") pod \"etcd-operator-b45778765-lscm9\" (UID: \"0e9596cb-d3e0-4103-9426-44e57ba1593d\") " pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: E1125 14:56:21.683755 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:22.183728457 +0000 UTC m=+140.930862887 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.684610 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/913937a0-8c0d-451d-b313-ed104a651929-config-volume\") pod \"dns-default-bspt9\" (UID: \"913937a0-8c0d-451d-b313-ed104a651929\") " pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.685277 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" event={"ID":"8eccda64-70d4-41d9-83bd-5e49545cb4ce","Type":"ContainerStarted","Data":"c0bd07f88c5f2d797509b0b33fcc403c708faebcd6db70f2303985e5adc37f00"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.685324 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" event={"ID":"8eccda64-70d4-41d9-83bd-5e49545cb4ce","Type":"ContainerStarted","Data":"2e0322ec6d42f97e2fe31902eb44163d6e66bb21178a91e0203e03f32a80af4b"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.685490 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/22bfd188-d7a7-44f7-8d9b-c69886faf3a7-node-bootstrap-token\") pod \"machine-config-server-jghzg\" (UID: \"22bfd188-d7a7-44f7-8d9b-c69886faf3a7\") " pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:21 crc 
kubenswrapper[4731]: I1125 14:56:21.686729 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/557361ba-d1ec-4592-b8be-33cfba5f8d78-cert\") pod \"ingress-canary-bqsdf\" (UID: \"557361ba-d1ec-4592-b8be-33cfba5f8d78\") " pod="openshift-ingress-canary/ingress-canary-bqsdf" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.687377 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/22bfd188-d7a7-44f7-8d9b-c69886faf3a7-certs\") pod \"machine-config-server-jghzg\" (UID: \"22bfd188-d7a7-44f7-8d9b-c69886faf3a7\") " pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.697331 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/913937a0-8c0d-451d-b313-ed104a651929-metrics-tls\") pod \"dns-default-bspt9\" (UID: \"913937a0-8c0d-451d-b313-ed104a651929\") " pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.698967 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" event={"ID":"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0","Type":"ContainerStarted","Data":"c539f60fbeb8e2e22f5f529f21dba5899c5428b5caec99dde87f88b6450869a3"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.699156 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms9jm\" (UniqueName: \"kubernetes.io/projected/d81b8c12-b875-47a7-b2ee-c8cac5539c87-kube-api-access-ms9jm\") pod \"router-default-5444994796-sclpc\" (UID: \"d81b8c12-b875-47a7-b2ee-c8cac5539c87\") " pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.702232 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" event={"ID":"82912045-fbea-4058-b4a4-e462a37d85b7","Type":"ContainerStarted","Data":"6a4fa0cf24b95438b8c4e5e95bc2ca5a6abc57201922f29012a750c6f7142ace"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.708040 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" event={"ID":"eea531d8-c1b6-4b38-908e-e385c965d06e","Type":"ContainerStarted","Data":"18468b64520c921ea2a2154532a287ba720f0059429cee4e546aa925bf1cdceb"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.709612 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" event={"ID":"5a254a00-2dcd-43cc-8859-fe0e4efed204","Type":"ContainerStarted","Data":"a331220588dcd80dc991b6d34d64fe41040457477f068ec9a5c1d34ea22e8e1e"} Nov 25 14:56:21 crc kubenswrapper[4731]: W1125 14:56:21.711090 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02f57d03_52ab_4660_bac1_d57f74e53e0e.slice/crio-0478d5bf6f285b7d75a0eec5e1d0b445476981904df0a19407f95f1674ef0126 WatchSource:0}: Error finding container 0478d5bf6f285b7d75a0eec5e1d0b445476981904df0a19407f95f1674ef0126: Status 404 returned error can't find the container with id 0478d5bf6f285b7d75a0eec5e1d0b445476981904df0a19407f95f1674ef0126 Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.711398 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver/apiserver-76f77b778f-n47rp" event={"ID":"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e","Type":"ContainerStarted","Data":"e0a6c6e40fad3ae21686d0d7c3dee9081e28d5a4221a074dc18af07cf5819963"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.715220 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffxw8\" (UniqueName: \"kubernetes.io/projected/50f274c0-be0d-4c02-a83e-b689a572a78e-kube-api-access-ffxw8\") pod \"packageserver-d55dfcdfc-2w858\" (UID: \"50f274c0-be0d-4c02-a83e-b689a572a78e\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.718889 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" event={"ID":"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca","Type":"ContainerStarted","Data":"f055e7a7cf33dc99fd9e67b8bb0d189268e08aa73f09b42b11704d574f269f90"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.723262 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gf6k5" event={"ID":"a6ef9a3c-21db-4567-a83c-16f1836b4393","Type":"ContainerStarted","Data":"40aeb5709a1755c7bcdedf109b2c0623542dc90190b738a81043644ec3e0c4c7"} Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.724913 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc5kp\" (UniqueName: \"kubernetes.io/projected/dabdd005-1a40-4e3f-80e7-48afcac46987-kube-api-access-tc5kp\") pod \"machine-config-controller-84d6567774-26nlv\" (UID: \"dabdd005-1a40-4e3f-80e7-48afcac46987\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.747554 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqbwg\" (UniqueName: \"kubernetes.io/projected/1201d935-c98b-429a-b1ef-dd2cf2e3fd5d-kube-api-access-fqbwg\") pod \"olm-operator-6b444d44fb-qmks6\" (UID: \"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.769461 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.770247 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-bound-sa-token\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.782244 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: E1125 14:56:21.783474 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-11-25 14:56:22.283452048 +0000 UTC m=+141.030586468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.796177 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5k2c\" (UniqueName: \"kubernetes.io/projected/513cf31e-cfbc-4978-aa3a-b99736bdf9ca-kube-api-access-b5k2c\") pod \"package-server-manager-789f6589d5-5ldqz\" (UID: \"513cf31e-cfbc-4978-aa3a-b99736bdf9ca\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.808008 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.824164 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vpcm5"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.854196 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.865761 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc2sg\" (UniqueName: \"kubernetes.io/projected/913937a0-8c0d-451d-b313-ed104a651929-kube-api-access-wc2sg\") pod \"dns-default-bspt9\" (UID: \"913937a0-8c0d-451d-b313-ed104a651929\") " pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.867943 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn8cp\" (UniqueName: \"kubernetes.io/projected/557361ba-d1ec-4592-b8be-33cfba5f8d78-kube-api-access-wn8cp\") pod \"ingress-canary-bqsdf\" (UID: \"557361ba-d1ec-4592-b8be-33cfba5f8d78\") " pod="openshift-ingress-canary/ingress-canary-bqsdf" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.868080 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66k6h\" (UniqueName: \"kubernetes.io/projected/22bfd188-d7a7-44f7-8d9b-c69886faf3a7-kube-api-access-66k6h\") pod \"machine-config-server-jghzg\" (UID: \"22bfd188-d7a7-44f7-8d9b-c69886faf3a7\") " pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.888515 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:21 crc kubenswrapper[4731]: E1125 14:56:21.889127 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:56:22.389094093 +0000 UTC m=+141.136228663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.902811 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.932273 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.947098 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.949433 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-nzzxp"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.961810 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.962257 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.978582 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.987648 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm"] Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.990145 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:21 crc kubenswrapper[4731]: E1125 14:56:21.990568 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:22.49054977 +0000 UTC m=+141.237684200 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:21 crc kubenswrapper[4731]: I1125 14:56:21.992935 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-cmr7l"] Nov 25 14:56:22 crc kubenswrapper[4731]: W1125 14:56:22.004110 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod740209e9_0d7f_4da0_b68f_2ef5628039df.slice/crio-9a85e44cce386dc33d18fcf53fd6aaab99ab472e37b1b551ca5174948c4daaa7 WatchSource:0}: Error finding container 9a85e44cce386dc33d18fcf53fd6aaab99ab472e37b1b551ca5174948c4daaa7: Status 404 returned error can't find the container with id 9a85e44cce386dc33d18fcf53fd6aaab99ab472e37b1b551ca5174948c4daaa7 Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.005953 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.041783 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:22 crc kubenswrapper[4731]: W1125 14:56:22.049626 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod013533b2_fe47_4b12_9ecf_98705944a1eb.slice/crio-fb7402eecf72266a95de4785ce7a6a22986354989e43e9c90131676a0e015add WatchSource:0}: Error finding container fb7402eecf72266a95de4785ce7a6a22986354989e43e9c90131676a0e015add: Status 404 returned error can't find the container with id fb7402eecf72266a95de4785ce7a6a22986354989e43e9c90131676a0e015add Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.060701 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg"] Nov 25 14:56:22 crc kubenswrapper[4731]: W1125 14:56:22.065377 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89d23cbd_5da9_4ae6_94eb_bd9cdc20ec71.slice/crio-cd6151376b9406ea11625fe987708c908bf85851bd88c8274dee0418b6706b3e WatchSource:0}: Error finding container cd6151376b9406ea11625fe987708c908bf85851bd88c8274dee0418b6706b3e: Status 404 returned error can't find the container with id cd6151376b9406ea11625fe987708c908bf85851bd88c8274dee0418b6706b3e Nov 25 14:56:22 crc kubenswrapper[4731]: W1125 14:56:22.071752 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc54c4d0_18b0_4249_9e18_d8a31c0135ae.slice/crio-7ae109e6bc391d4256884ee90686cc5860edf4ebd24d66dd73d16729ac06c05b WatchSource:0}: Error finding container 7ae109e6bc391d4256884ee90686cc5860edf4ebd24d66dd73d16729ac06c05b: Status 404 returned error can't find the container with id 7ae109e6bc391d4256884ee90686cc5860edf4ebd24d66dd73d16729ac06c05b Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.091400 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:22 crc kubenswrapper[4731]: E1125 14:56:22.092478 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:22.592436788 +0000 UTC m=+141.339571218 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.107685 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:22 crc kubenswrapper[4731]: W1125 14:56:22.113039 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode3c25ac6_1dcc_4c2e_9983_760215771431.slice/crio-a87208fd6dae38274dc9002e9dc28551f34c8ba753c897f83d7eaad0a803b114 WatchSource:0}: Error finding container a87208fd6dae38274dc9002e9dc28551f34c8ba753c897f83d7eaad0a803b114: Status 404 returned error can't find the container with id a87208fd6dae38274dc9002e9dc28551f34c8ba753c897f83d7eaad0a803b114 Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.144233 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-bqsdf" Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.150543 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-jghzg" Nov 25 14:56:22 crc kubenswrapper[4731]: W1125 14:56:22.192956 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bfcce86_59b2_4bfb_b8da_9a7eb76e766c.slice/crio-5fc0a15b43feccdd38d48839a908410c0e8d74be1039ded91a9b2dd26e407c34 WatchSource:0}: Error finding container 5fc0a15b43feccdd38d48839a908410c0e8d74be1039ded91a9b2dd26e407c34: Status 404 returned error can't find the container with id 5fc0a15b43feccdd38d48839a908410c0e8d74be1039ded91a9b2dd26e407c34 Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.193420 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-grktp"] Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.194973 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:22 crc kubenswrapper[4731]: E1125 14:56:22.195331 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:22.695318371 +0000 UTC m=+141.442452801 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.207124 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv"] Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.255749 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5mpt"] Nov 25 14:56:22 crc kubenswrapper[4731]: W1125 14:56:22.269065 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e59cf0b_743e_430a_b054_42dc854f20dc.slice/crio-6ae7f298048d0a65db5f329c470790a61e0840a189981bfa83e71f41ac761cfd WatchSource:0}: Error finding container 6ae7f298048d0a65db5f329c470790a61e0840a189981bfa83e71f41ac761cfd: Status 404 returned error can't find the container with id 6ae7f298048d0a65db5f329c470790a61e0840a189981bfa83e71f41ac761cfd Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.299214 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:22 crc kubenswrapper[4731]: E1125 14:56:22.299796 4731 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:22.799776336 +0000 UTC m=+141.546910766 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.312636 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr"] Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.314734 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj"] Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.401101 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:22 crc kubenswrapper[4731]: E1125 14:56:22.401428 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:22.901417567 +0000 UTC m=+141.648551997 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.503478 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:22 crc kubenswrapper[4731]: E1125 14:56:22.504268 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.00424897 +0000 UTC m=+141.751383410 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.564726 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-d9dwj"] Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.580591 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-fjq2t"] Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.605906 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:22 crc kubenswrapper[4731]: E1125 14:56:22.606483 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.106454356 +0000 UTC m=+141.853588956 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.644954 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6"] Nov 25 14:56:22 crc kubenswrapper[4731]: E1125 14:56:22.708113 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.208058996 +0000 UTC m=+141.955193426 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.707801 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.709352 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 14:56:22 crc kubenswrapper[4731]: E1125 14:56:22.709900 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.209885354 +0000 UTC m=+141.957019774 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.751289 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" event={"ID":"a160b766-12c1-4549-b892-794fe9009a9e","Type":"ContainerStarted","Data":"f461ff99928e48177fd2a99b6ae29721816e188104e4a7b2c692d8b35df64026"}
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.752926 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz"]
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.758805 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" event={"ID":"8380637d-e32d-4018-8081-28b3076fad29","Type":"ContainerStarted","Data":"cf59963683d10918fbadddbfd01e2272f2576bcac120673430d6d47a3c49a82d"}
Nov 25 14:56:22 crc kubenswrapper[4731]: W1125 14:56:22.763531 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e2c002e_79b6_44a2_9789_fa2da26f3cd5.slice/crio-7e72a9eb7eff4669bb0b55801c4462a70f653238fb67a88511e94810ba82fadb WatchSource:0}: Error finding container 7e72a9eb7eff4669bb0b55801c4462a70f653238fb67a88511e94810ba82fadb: Status 404 returned error can't find the container with id 7e72a9eb7eff4669bb0b55801c4462a70f653238fb67a88511e94810ba82fadb
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.764228 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" event={"ID":"c84e85c2-1b8f-4e13-9b6c-9824981d26e8","Type":"ContainerStarted","Data":"41124cbb70bd42e38399f7eddf51e77894933c4bec7999c7d97aa5afc54d5115"}
Nov 25 14:56:22 crc kubenswrapper[4731]: W1125 14:56:22.785646 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod513cf31e_cfbc_4978_aa3a_b99736bdf9ca.slice/crio-1364b5c8985b636e832383b40ae980f9b2b90772c7595807d6bf2eaef1f329f4 WatchSource:0}: Error finding container 1364b5c8985b636e832383b40ae980f9b2b90772c7595807d6bf2eaef1f329f4: Status 404 returned error can't find the container with id 1364b5c8985b636e832383b40ae980f9b2b90772c7595807d6bf2eaef1f329f4
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.792555 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" event={"ID":"e5ccef3b-b78c-42b7-aa17-b52800f28cc0","Type":"ContainerStarted","Data":"8da38edf3ef1e5b2e37366d35f0621c09acac987962214beb16b38681636b5b0"}
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.802920 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" event={"ID":"0b48e8f6-4a4f-40da-8304-69ed663dc05b","Type":"ContainerStarted","Data":"1760e9340a68f4daaaee150d5b73053bfb16b906e1391c710e82d21ff1ebd7a6"}
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.804170 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" event={"ID":"02f57d03-52ab-4660-bac1-d57f74e53e0e","Type":"ContainerStarted","Data":"898780b7e49d92b80bda9c79fe2cf4d0893a8eeab32922bba38099998647b2d7"}
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.804198 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" event={"ID":"02f57d03-52ab-4660-bac1-d57f74e53e0e","Type":"ContainerStarted","Data":"0478d5bf6f285b7d75a0eec5e1d0b445476981904df0a19407f95f1674ef0126"}
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.808928 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" event={"ID":"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5","Type":"ContainerStarted","Data":"f9bab356c3eef296b4be5dd3c44532bfb94644f6b4913f9c5bc904635b302ed2"}
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.810090 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns"
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.810379 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:56:22 crc kubenswrapper[4731]: E1125 14:56:22.810879 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.310860908 +0000 UTC m=+142.057995328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.828866 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" event={"ID":"013533b2-fe47-4b12-9ecf-98705944a1eb","Type":"ContainerStarted","Data":"fb7402eecf72266a95de4785ce7a6a22986354989e43e9c90131676a0e015add"}
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.828929 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" event={"ID":"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71","Type":"ContainerStarted","Data":"cd6151376b9406ea11625fe987708c908bf85851bd88c8274dee0418b6706b3e"}
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.830684 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv"]
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.832471 4731 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-p4sns container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.832624 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" podUID="afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Nov 25 14:56:22 crc kubenswrapper[4731]: W1125 14:56:22.850899 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80231337_3fb3_499b_a626_48423f329e0b.slice/crio-dcc638de950c5a1091310cb840645c7d7e73df294cf8cb120de8a6a9c1479e32 WatchSource:0}: Error finding container dcc638de950c5a1091310cb840645c7d7e73df294cf8cb120de8a6a9c1479e32: Status 404 returned error can't find the container with id dcc638de950c5a1091310cb840645c7d7e73df294cf8cb120de8a6a9c1479e32
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.851531 4731 generic.go:334] "Generic (PLEG): container finished" podID="a0daa5b0-809c-4237-bd24-b1eabc5a4a0e" containerID="0170117c1d05eaa359cd7e76647b8ef159779b059d44ac7381bfc2ab0774101e" exitCode=0
Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.851994 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" event={"ID":"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e","Type":"ContainerDied","Data":"0170117c1d05eaa359cd7e76647b8ef159779b059d44ac7381bfc2ab0774101e"}
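The teardown and mount failures above all reduce to one condition: the kubelet cannot find kubevirt.io.hostpath-provisioner in its list of registered CSI drivers, even though the csi-hostpathplugin-cmr7l pod has just started, so every operation on pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 is requeued until the driver's node registration completes. A minimal Go/client-go sketch for the cluster-level view, listing the CSIDriver objects the API server knows about; note the kubelet's own check is against its node-local plugin registry, and the kubeconfig location here is an assumption for illustration, not something this log prescribes.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// CSIDriver objects are cluster-scoped; the kubelet error in the log means
	// the node-local registration had not completed at that point, which usually
	// coincides with the driver pod still starting up.
	drivers, err := cs.StorageV1().CSIDrivers().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range drivers.Items {
		fmt.Println(d.Name) // expect kubevirt.io.hostpath-provisioner once registered
	}
}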
event={"ID":"7966982c-d1bf-409d-a672-6bbaf0d1c954","Type":"ContainerStarted","Data":"cf646e2017f6468f2b97c62a2f0464ff903146f46bed8535badeecb2ed2765cf"} Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.856719 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg"] Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.863178 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" event={"ID":"ddc212cb-2fed-4211-be15-5bd5c25d479a","Type":"ContainerStarted","Data":"fbbbaa519f13dc36f236c2ce3d4df93d3b776b44c5c3674de4c81d33ebefa324"} Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.881065 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nzzxp" event={"ID":"fc54c4d0-18b0-4249-9e18-d8a31c0135ae","Type":"ContainerStarted","Data":"7ae109e6bc391d4256884ee90686cc5860edf4ebd24d66dd73d16729ac06c05b"} Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.938651 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" event={"ID":"eb067501-cc83-4cdd-ad52-224774a043c4","Type":"ContainerStarted","Data":"8748d2f508415e09f66e93d3a324abef1e94a93f0ecee0c1efa7a43f17cbe35d"} Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.948650 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:22 crc kubenswrapper[4731]: E1125 14:56:22.950700 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.450683772 +0000 UTC m=+142.197818202 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.956955 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" event={"ID":"8eed496a-1d92-43f0-8571-6667803a45d0","Type":"ContainerStarted","Data":"211af20aeacc8df6361b3f01b78f07858c44afbe3a2c501d4e553a392bb57785"} Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.964195 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" event={"ID":"b0a72ef0-689b-4f4c-a94f-c1f84e8fc5d0","Type":"ContainerStarted","Data":"08b2e04b7cbcbc9af3d9660af921a90a7215d323a13bcfc1aaaa31f3faf4d07d"} Nov 25 14:56:22 crc kubenswrapper[4731]: I1125 14:56:22.988669 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" event={"ID":"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c","Type":"ContainerStarted","Data":"5fc0a15b43feccdd38d48839a908410c0e8d74be1039ded91a9b2dd26e407c34"} Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.001352 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" event={"ID":"39a12893-02aa-41c9-94f9-2c3d11cff3bb","Type":"ContainerStarted","Data":"92d59ba0c7c78ea831fe252bdb3e21127bff7439407bdb3a024d6f9ad7278042"} Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.015836 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" event={"ID":"5a254a00-2dcd-43cc-8859-fe0e4efed204","Type":"ContainerStarted","Data":"d2493ecf41c95348a5e93cde4bbc11db3edeb0445455ea87c558e7c4bf4b1d08"} Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.031747 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" event={"ID":"e3c25ac6-1dcc-4c2e-9983-760215771431","Type":"ContainerStarted","Data":"a87208fd6dae38274dc9002e9dc28551f34c8ba753c897f83d7eaad0a803b114"} Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.050779 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.051020 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858"] Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.052527 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.552497877 +0000 UTC m=+142.299632487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.065289 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-lscm9"] Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.072181 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" event={"ID":"8eccda64-70d4-41d9-83bd-5e49545cb4ce","Type":"ContainerStarted","Data":"3d91f615ca5611e50e3ad77d2481ff0ad76c0901274cbee806822460fca984bf"} Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.085165 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt" event={"ID":"ace20ee2-d2d1-402e-9a3d-1b3f4fb2491c","Type":"ContainerStarted","Data":"6697c1d2f7a0e79be86a29ebe4a82ef41a444dedab63035ffb805f51110f265e"} Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.139598 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" event={"ID":"82912045-fbea-4058-b4a4-e462a37d85b7","Type":"ContainerStarted","Data":"b75bfb5c7c1b16a0c64d681174b6fa9ddc8825d7ac1df4ad8b5650fb341cf502"} Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.147962 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" event={"ID":"2e59cf0b-743e-430a-b054-42dc854f20dc","Type":"ContainerStarted","Data":"6ae7f298048d0a65db5f329c470790a61e0840a189981bfa83e71f41ac761cfd"} Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.149792 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-bqsdf"] Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.157093 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.157701 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.657678551 +0000 UTC m=+142.404812981 (durationBeforeRetry 500ms). 
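Each of these nestedpendingoperations entries carries its retry arithmetic inline: the "No retries permitted until" deadline is the failure time plus the quoted durationBeforeRetry. Under repeated failures of the same operation the kubelet backs off exponentially from this base (an assumption about kubelet behavior in general; this excerpt only ever shows the 500ms step). A quick Go check against the first entry from 14:56:22, using only values copied from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the first nestedpendingoperations entry:
	// failure logged at 14:56:22.709900, deadline 14:56:23.209885354.
	failedAt, _ := time.Parse(time.RFC3339Nano, "2025-11-25T14:56:22.709900Z")
	deadline, _ := time.Parse(time.RFC3339Nano, "2025-11-25T14:56:23.209885354Z")

	// Prints ~499.985354ms: the deadline is the failure time plus the 500ms
	// durationBeforeRetry, minus the few microseconds between computing the
	// deadline and writing the log line.
	fmt.Println(deadline.Sub(failedAt))
}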
Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.157701 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.657678551 +0000 UTC m=+142.404812981 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.161795 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gf6k5" event={"ID":"a6ef9a3c-21db-4567-a83c-16f1836b4393","Type":"ContainerStarted","Data":"44464dc8bcff637fc256c7f2a756fad9d6f42a915ede718cf2523f48b55bc98d"}
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.162430 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gf6k5"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.166001 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" event={"ID":"740209e9-0d7f-4da0-b68f-2ef5628039df","Type":"ContainerStarted","Data":"9a85e44cce386dc33d18fcf53fd6aaab99ab472e37b1b551ca5174948c4daaa7"}
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.173645 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" event={"ID":"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca","Type":"ContainerStarted","Data":"6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6"}
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.174340 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.184381 4731 patch_prober.go:28] interesting pod/console-operator-58897d9998-gf6k5 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.184418 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gf6k5" podUID="a6ef9a3c-21db-4567-a83c-16f1836b4393" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.188327 4731 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-c2m7w container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body=
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.188381 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" podUID="f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.209378 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" event={"ID":"eea531d8-c1b6-4b38-908e-e385c965d06e","Type":"ContainerStarted","Data":"99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4"}
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.212551 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.217200 4731 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fp5cw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body=
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.217262 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" podUID="eea531d8-c1b6-4b38-908e-e385c965d06e" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.224220 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dhtk5" event={"ID":"5a457d06-8bf4-4077-a3e9-d25ff971832b","Type":"ContainerStarted","Data":"e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26"}
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.259623 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.260543 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.760523033 +0000 UTC m=+142.507657463 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.281842 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.286837 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.786783326 +0000 UTC m=+142.533917756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.289390 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-bspt9"]
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.349146 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-29qbz" podStartSLOduration=117.349109566 podStartE2EDuration="1m57.349109566s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:23.341341404 +0000 UTC m=+142.088475834" watchObservedRunningTime="2025-11-25 14:56:23.349109566 +0000 UTC m=+142.096243996"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.371830 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" podStartSLOduration=116.371791845 podStartE2EDuration="1m56.371791845s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:23.370603924 +0000 UTC m=+142.117738354" watchObservedRunningTime="2025-11-25 14:56:23.371791845 +0000 UTC m=+142.118926275"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.394483 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
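The readiness-probe failures in this stretch are all the same shape: the kubelet's prober dials the pod IP and gets connection refused because the freshly started container is not listening yet, so the pod stays unready until a later probe succeeds. A self-contained Go sketch of an equivalent manual check against one endpoint taken from the log; the timeout value and the InsecureSkipVerify choice are assumptions (the point is to test connectivity, not the serving certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Short timeout, in the spirit of kubelet probes; exact value is an assumption.
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip certificate verification so a self-signed serving
			// cert does not mask the connectivity result we care about here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	// Endpoint copied from the route-controller-manager probe failure above.
	resp, err := client.Get("https://10.217.0.6:8443/healthz")
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. "... connect: connection refused"
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}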
Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.397186 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.897141974 +0000 UTC m=+142.644276404 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.486728 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-dhtk5" podStartSLOduration=117.48670562 podStartE2EDuration="1m57.48670562s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:23.485741675 +0000 UTC m=+142.232876105" watchObservedRunningTime="2025-11-25 14:56:23.48670562 +0000 UTC m=+142.233840050"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.496346 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.496713 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:23.99669835 +0000 UTC m=+142.743832780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.524965 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gf6k5" podStartSLOduration=117.524935154 podStartE2EDuration="1m57.524935154s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:23.521897695 +0000 UTC m=+142.269032125" watchObservedRunningTime="2025-11-25 14:56:23.524935154 +0000 UTC m=+142.272069584"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.598263 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.599404 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:24.099380839 +0000 UTC m=+142.846515269 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.628499 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" podStartSLOduration=117.628472905 podStartE2EDuration="1m57.628472905s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:23.627747276 +0000 UTC m=+142.374881726" watchObservedRunningTime="2025-11-25 14:56:23.628472905 +0000 UTC m=+142.375607335"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.689793 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mfgp7" podStartSLOduration=117.689738997 podStartE2EDuration="1m57.689738997s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:23.650583889 +0000 UTC m=+142.397718329" watchObservedRunningTime="2025-11-25 14:56:23.689738997 +0000 UTC m=+142.436873417"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.700255 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.700649 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:24.2006366 +0000 UTC m=+142.947771030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.838444 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8eed496a_1d92_43f0_8571_6667803a45d0.slice/crio-02194012917d3e05a9df8f80893504eb1b712004431a7c9fc90faf6a0edc65df.scope\": RecentStats: unable to find data in memory cache]"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.839466 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.839916 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:24.339898769 +0000 UTC m=+143.087033199 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.897833 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-8sl64" podStartSLOduration=117.897787863 podStartE2EDuration="1m57.897787863s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:23.888591934 +0000 UTC m=+142.635726364" watchObservedRunningTime="2025-11-25 14:56:23.897787863 +0000 UTC m=+142.644922293"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.944499 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tbhxk" podStartSLOduration=116.944481817 podStartE2EDuration="1m56.944481817s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:23.941765766 +0000 UTC m=+142.688900196" watchObservedRunningTime="2025-11-25 14:56:23.944481817 +0000 UTC m=+142.691616247"
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.946862 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 14:56:23 crc kubenswrapper[4731]: E1125 14:56:23.947326 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:24.44730735 +0000 UTC m=+143.194441780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:23 crc kubenswrapper[4731]: I1125 14:56:23.987463 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-jxnpg" podStartSLOduration=116.987442233 podStartE2EDuration="1m56.987442233s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:23.980235606 +0000 UTC m=+142.727370036" watchObservedRunningTime="2025-11-25 14:56:23.987442233 +0000 UTC m=+142.734576663"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.056308 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.056736 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:24.556721014 +0000 UTC m=+143.303855444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.086913 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-mvw8v" podStartSLOduration=118.086894418 podStartE2EDuration="1m58.086894418s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.085913372 +0000 UTC m=+142.833047802" watchObservedRunningTime="2025-11-25 14:56:24.086894418 +0000 UTC m=+142.834028848"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.089190 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" podStartSLOduration=118.089182487 podStartE2EDuration="1m58.089182487s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.054425884 +0000 UTC m=+142.801560314" watchObservedRunningTime="2025-11-25 14:56:24.089182487 +0000 UTC m=+142.836316917"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.157751 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
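The "Observed pod startup duration" entries are plain timestamp arithmetic: podStartE2EDuration is the watch-observed running time minus podCreationTimestamp (image pulls would enter via the firstStartedPulling/lastFinishedPulling fields, which are zero here). A worked check in Go against the machine-approver entry earlier in this log; all values are copied from that entry, and only the parsing layout is added:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the machine-approver startup-latency entry at 14:56:23.349146.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created, err := time.Parse(layout, "2025-11-25 14:54:26 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-11-25 14:56:23.349109566 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1m57.349109566s, matching podStartE2EDuration and the
	// podStartSLOduration=117.349109566 seconds in the entry.
	fmt.Println(observed.Sub(created))
}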
Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.158176 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:24.65815909 +0000 UTC m=+143.405293530 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.238288 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" event={"ID":"013533b2-fe47-4b12-9ecf-98705944a1eb","Type":"ContainerStarted","Data":"acda7b7aa9e001e32662486c3b59ac8668ca2bb1180aa620fb184f38cc02892f"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.250868 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" event={"ID":"e3c25ac6-1dcc-4c2e-9983-760215771431","Type":"ContainerStarted","Data":"c59f717d15bbb07983fb166e5d25dc468ab6924c5891ac867caecc0abe3af376"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.257235 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" event={"ID":"15711751-e25f-4d8e-b9d7-1882b5bedd4f","Type":"ContainerStarted","Data":"7c21b8ea595928f34410d5ce12f118867a1e9f49821d629a503b10c9374d5aff"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.260332 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.260683 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:24.760665834 +0000 UTC m=+143.507800264 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.261983 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" event={"ID":"50f274c0-be0d-4c02-a83e-b689a572a78e","Type":"ContainerStarted","Data":"258e2055490444346bb15a16f970b28196f15a57e64f3155f408fbd9a4122ccb"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.262023 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" event={"ID":"50f274c0-be0d-4c02-a83e-b689a572a78e","Type":"ContainerStarted","Data":"d22bda1bfdb11764f9cd59cf8e650a80a50574c45d3036ebba1639829cce1f73"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.262560 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.265484 4731 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2w858 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body=
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.265527 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" podUID="50f274c0-be0d-4c02-a83e-b689a572a78e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.266907 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" event={"ID":"8380637d-e32d-4018-8081-28b3076fad29","Type":"ContainerStarted","Data":"3f29d25ea98a8a078e02e86fe0303d3b903e873ea82880d9c7a7a06d31824d4e"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.275057 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-qpv96" podStartSLOduration=117.275041337 podStartE2EDuration="1m57.275041337s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.271663219 +0000 UTC m=+143.018797649" watchObservedRunningTime="2025-11-25 14:56:24.275041337 +0000 UTC m=+143.022175767"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.295181 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" event={"ID":"513cf31e-cfbc-4978-aa3a-b99736bdf9ca","Type":"ContainerStarted","Data":"1364b5c8985b636e832383b40ae980f9b2b90772c7595807d6bf2eaef1f329f4"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.306913 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" podStartSLOduration=118.306895525 podStartE2EDuration="1m58.306895525s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.30477336 +0000 UTC m=+143.051907790" watchObservedRunningTime="2025-11-25 14:56:24.306895525 +0000 UTC m=+143.054029955"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.329459 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" event={"ID":"eb067501-cc83-4cdd-ad52-224774a043c4","Type":"ContainerStarted","Data":"74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.331090 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.355083 4731 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-t5mpt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.355157 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" podUID="eb067501-cc83-4cdd-ad52-224774a043c4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.357495 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dhjlm" podStartSLOduration=118.357465249 podStartE2EDuration="1m58.357465249s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.355477988 +0000 UTC m=+143.102612418" watchObservedRunningTime="2025-11-25 14:56:24.357465249 +0000 UTC m=+143.104599679"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.363354 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.364942 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:24.864923473 +0000 UTC m=+143.612058113 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.390504 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" event={"ID":"2e59cf0b-743e-430a-b054-42dc854f20dc","Type":"ContainerStarted","Data":"97b387a7ad48d696897bc38ffdc7c96a22bb157f117acfc0be1f639c130c352a"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.410618 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" podStartSLOduration=117.41060271 podStartE2EDuration="1m57.41060271s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.406924024 +0000 UTC m=+143.154058464" watchObservedRunningTime="2025-11-25 14:56:24.41060271 +0000 UTC m=+143.157737150"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.431225 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" event={"ID":"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c","Type":"ContainerStarted","Data":"ed05cd159cd8ed13358567d7c26b75e138f8104f68afee5a36377600ffbba566"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.449864 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bspt9" event={"ID":"913937a0-8c0d-451d-b313-ed104a651929","Type":"ContainerStarted","Data":"5e078211ad838a6439b2169dc5f319fe2dcfbbcc702ea01e2b547e332c7efa49"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.464183 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" event={"ID":"0e9596cb-d3e0-4103-9426-44e57ba1593d","Type":"ContainerStarted","Data":"5ad32599c5c4f157c75862627fddaab72a29ea1d2c05447f8c99b6b442bff68b"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.464826 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.466091 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:24.966072332 +0000 UTC m=+143.713206762 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.492035 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" event={"ID":"e5ccef3b-b78c-42b7-aa17-b52800f28cc0","Type":"ContainerStarted","Data":"245a353dc734d83e74090ba15c0ff31f378ce764a9566b315b8fec7383a7b08b"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.522263 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" podStartSLOduration=117.522235591 podStartE2EDuration="1m57.522235591s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.439512211 +0000 UTC m=+143.186646641" watchObservedRunningTime="2025-11-25 14:56:24.522235591 +0000 UTC m=+143.269370021"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.522957 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" event={"ID":"7966982c-d1bf-409d-a672-6bbaf0d1c954","Type":"ContainerStarted","Data":"f33306d6c54331cfa98db9850b84f0f0fbfbcb4586beee07083f3e55a8a187f0"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.523883 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-grktp" podStartSLOduration=117.523876994 podStartE2EDuration="1m57.523876994s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.521445831 +0000 UTC m=+143.268580261" watchObservedRunningTime="2025-11-25 14:56:24.523876994 +0000 UTC m=+143.271011424"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.529596 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nzzxp" event={"ID":"fc54c4d0-18b0-4249-9e18-d8a31c0135ae","Type":"ContainerStarted","Data":"b0a1c04442e371bedc0c7ffbf9545aaff16bb5b05c1b4a559db0b68892b4dab0"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.530319 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-nzzxp"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.550050 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.550116 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.563181 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" event={"ID":"0b48e8f6-4a4f-40da-8304-69ed663dc05b","Type":"ContainerStarted","Data":"45594105f8d39cf9a62349b1d8eb7c820b7803c478ea9b691017fb4868843ea9"}
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.579773 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.581917 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.586933 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.086911192 +0000 UTC m=+143.834045622 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.596649 4731 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tkwxv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.596741 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" podUID="0b48e8f6-4a4f-40da-8304-69ed663dc05b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.623993 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-nzzxp" podStartSLOduration=118.616553162 podStartE2EDuration="1m58.616553162s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.610877465 +0000 UTC m=+143.358011895" watchObservedRunningTime="2025-11-25 14:56:24.616553162 +0000 UTC m=+143.363687592"
Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.625546 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jghzg" event={"ID":"22bfd188-d7a7-44f7-8d9b-c69886faf3a7","Type":"ContainerStarted","Data":"508425903c69cb5c22e3582fec7f6918cecd40cc57300f112693a4cc317554c5"}
"SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" event={"ID":"39a12893-02aa-41c9-94f9-2c3d11cff3bb","Type":"ContainerStarted","Data":"591dce71d3c7490cc41410c0bfda8c7be9d6b6ec0c1f708082ea6e8562a6a331"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.664129 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" event={"ID":"80231337-3fb3-499b-a626-48423f329e0b","Type":"ContainerStarted","Data":"dcc638de950c5a1091310cb840645c7d7e73df294cf8cb120de8a6a9c1479e32"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.683609 4731 generic.go:334] "Generic (PLEG): container finished" podID="8eed496a-1d92-43f0-8571-6667803a45d0" containerID="02194012917d3e05a9df8f80893504eb1b712004431a7c9fc90faf6a0edc65df" exitCode=0 Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.684615 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" event={"ID":"8eed496a-1d92-43f0-8571-6667803a45d0","Type":"ContainerDied","Data":"02194012917d3e05a9df8f80893504eb1b712004431a7c9fc90faf6a0edc65df"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.684495 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" podStartSLOduration=117.684469747 podStartE2EDuration="1m57.684469747s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.663952924 +0000 UTC m=+143.411087354" watchObservedRunningTime="2025-11-25 14:56:24.684469747 +0000 UTC m=+143.431604177" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.684887 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.685544 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.185524775 +0000 UTC m=+143.932659205 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.685957 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.687278 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.18726858 +0000 UTC m=+143.934403010 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.688077 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-jghzg" podStartSLOduration=6.688069121 podStartE2EDuration="6.688069121s" podCreationTimestamp="2025-11-25 14:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.683551123 +0000 UTC m=+143.430685553" watchObservedRunningTime="2025-11-25 14:56:24.688069121 +0000 UTC m=+143.435203551" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.717152 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" event={"ID":"c84e85c2-1b8f-4e13-9b6c-9824981d26e8","Type":"ContainerStarted","Data":"8e2ab18e66cf7acc59cb6149ad64a50506b61ebf164ff8b51d335ebc7b2e9a9d"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.744149 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" event={"ID":"dabdd005-1a40-4e3f-80e7-48afcac46987","Type":"ContainerStarted","Data":"7be436dfc4377f161d02c109907a6d1fd121541f74ea8098f5ea4c99cdbf0f2b"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.747223 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-l8zwj" podStartSLOduration=117.747200507 podStartE2EDuration="1m57.747200507s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.74421785 +0000 UTC 
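Every MountVolume.MountDevice and UnmountVolume.TearDown failure in this stretch of the journal reduces to the same condition: kubevirt.io.hostpath-provisioner is missing from the kubelet's registry of CSI drivers, so the volume code cannot build a client for it. The hostpath plugin pod (hostpath-provisioner/csi-hostpathplugin-cmr7l) only reports ContainerStarted at 14:56:26 further down, so these errors look like the expected transient window between the image-registry pod being scheduled and the CSI node plugin registering itself with the kubelet. One way to watch that registration land is to read the node's CSINode object, which the kubelet updates as drivers register. The sketch below does that with client-go; the default kubeconfig location and the node name "crc" (taken from this journal's hostname field) are assumptions for illustration, not anything the log itself prescribes.

    // listcsidrivers.go - a minimal client-go sketch (not part of the kubelet
    // sources quoted above) that prints the CSI drivers registered on a node.
    // While the hostpath plugin is still starting, the list printed here would
    // not contain "kubevirt.io.hostpath-provisioner", which is exactly the
    // state the mount/unmount retries above are waiting out.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumes a kubeconfig at the default location (~/.kube/config).
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // "crc" is the node name in this journal; adjust for other clusters.
        csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            fmt.Println("registered CSI driver:", d.Name)
        }
    }

Once the driver shows up in CSINode.Spec.Drivers, the retries below can be expected to start succeeding.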
m=+143.491352280" watchObservedRunningTime="2025-11-25 14:56:24.747200507 +0000 UTC m=+143.494334937" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.754989 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" event={"ID":"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d","Type":"ContainerStarted","Data":"245488d6cd2d3f1c2a0d37e0de1dec14ff18907c12659cb46b5f195b9e0329ee"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.755451 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.760357 4731 generic.go:334] "Generic (PLEG): container finished" podID="89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71" containerID="bc89145323f323fa8c44826534513695dbd1a53113a1b21d98cfe6d9185b92fe" exitCode=0 Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.760899 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" event={"ID":"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71","Type":"ContainerDied","Data":"bc89145323f323fa8c44826534513695dbd1a53113a1b21d98cfe6d9185b92fe"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.761445 4731 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qmks6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.761486 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" podUID="1201d935-c98b-429a-b1ef-dd2cf2e3fd5d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.765643 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bqsdf" event={"ID":"557361ba-d1ec-4592-b8be-33cfba5f8d78","Type":"ContainerStarted","Data":"f7a14de8665cdb42224d58e1b3954ee3f190103db3b5e2941a0a39ce0cc63504"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.782675 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" podStartSLOduration=117.782607768 podStartE2EDuration="1m57.782607768s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.781187021 +0000 UTC m=+143.528321471" watchObservedRunningTime="2025-11-25 14:56:24.782607768 +0000 UTC m=+143.529742208" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.790185 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" event={"ID":"740209e9-0d7f-4da0-b68f-2ef5628039df","Type":"ContainerStarted","Data":"c0397f3f1637cd982c5f55cfbec712e9ab19be1652e24c36f586f7a1c10a99dd"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.791069 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.792892 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.292857294 +0000 UTC m=+144.039991884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.834754 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sclpc" event={"ID":"d81b8c12-b875-47a7-b2ee-c8cac5539c87","Type":"ContainerStarted","Data":"0f9c97906e2a388db04b1d02046c24a4806497b16a3a7ae388e264f110d9da48"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.834803 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-sclpc" event={"ID":"d81b8c12-b875-47a7-b2ee-c8cac5539c87","Type":"ContainerStarted","Data":"85f2275ffb718b195ca3a1f5c845ef037468fc91bcec1e6d98a762c94dca7cfb"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.834929 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" event={"ID":"9e2c002e-79b6-44a2-9789-fa2da26f3cd5","Type":"ContainerStarted","Data":"7e72a9eb7eff4669bb0b55801c4462a70f653238fb67a88511e94810ba82fadb"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.838430 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt" event={"ID":"ace20ee2-d2d1-402e-9a3d-1b3f4fb2491c","Type":"ContainerStarted","Data":"5d03336460514d16d18f109fc8619ea0979c7267ff3e281251d61c8eb6ed1ed2"} Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.846353 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.846617 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.865040 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-sclpc" podStartSLOduration=117.865017029 podStartE2EDuration="1m57.865017029s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:24.864127036 +0000 UTC m=+143.611261466" watchObservedRunningTime="2025-11-25 14:56:24.865017029 +0000 UTC m=+143.612151459" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.893199 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.895606 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.395592104 +0000 UTC m=+144.142726534 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.979781 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.984965 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.985014 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.995211 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.995483 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.495441839 +0000 UTC m=+144.242576259 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:24 crc kubenswrapper[4731]: I1125 14:56:24.995742 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:24 crc kubenswrapper[4731]: E1125 14:56:24.998782 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.498757875 +0000 UTC m=+144.245892475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.097544 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.098341 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.598319862 +0000 UTC m=+144.345454292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.180194 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gf6k5" Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.200488 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.201381 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.701361 +0000 UTC m=+144.448495430 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.201448 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.308219 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.308779 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.808753951 +0000 UTC m=+144.555888391 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.418367 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.418858 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:25.918842802 +0000 UTC m=+144.665977232 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.523038 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.523833 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.023799039 +0000 UTC m=+144.770933469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.625539 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.626082 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.126063437 +0000 UTC m=+144.873197857 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.726110 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.726383 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.226343523 +0000 UTC m=+144.973478093 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.726543 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.727079 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.227071082 +0000 UTC m=+144.974205512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.827453 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.827648 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.327596404 +0000 UTC m=+145.074730834 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.828143 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.828602 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.32859156 +0000 UTC m=+145.075726170 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.854366 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" event={"ID":"15711751-e25f-4d8e-b9d7-1882b5bedd4f","Type":"ContainerStarted","Data":"8836eb623e333d2b1b61d7c3fc318f0c18883cec0c4bf1b4edd8c96a92c195fd"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.869926 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bspt9" event={"ID":"913937a0-8c0d-451d-b313-ed104a651929","Type":"ContainerStarted","Data":"a7ca5a2a7056e2fd01c91740d8eda1d9e1475ab61599e2258670376433f90dbe"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.869993 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-bspt9" event={"ID":"913937a0-8c0d-451d-b313-ed104a651929","Type":"ContainerStarted","Data":"34e13ff8da19f381e13bbf3571a5f6a4f3b9ecbadf425c9544a8ec614ad318c7"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.879889 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" event={"ID":"740209e9-0d7f-4da0-b68f-2ef5628039df","Type":"ContainerStarted","Data":"51c0454eeb94e28b5b74f60778c7b901c45564be5d5a05df90055a92f7ac44b1"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.882898 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" event={"ID":"7966982c-d1bf-409d-a672-6bbaf0d1c954","Type":"ContainerStarted","Data":"2602ac29f5707f7f4cd6cdd5595d138bccd60963dbe23732ddfd58ca9bd4b9c0"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.887018 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" event={"ID":"39a12893-02aa-41c9-94f9-2c3d11cff3bb","Type":"ContainerStarted","Data":"b5b9c410779c67d1cee8ec8c89291d9ed632dfae14b2d51bc70fe0079f4c285d"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.889650 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt" event={"ID":"ace20ee2-d2d1-402e-9a3d-1b3f4fb2491c","Type":"ContainerStarted","Data":"1ddf7fef7ae5126b7ec43dc17b03dc6e4ca06bce40797010af5361e329c015cb"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.894478 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" event={"ID":"8eed496a-1d92-43f0-8571-6667803a45d0","Type":"ContainerStarted","Data":"1cee03f458c3e9c0752729fcf0e0020ecfe5eaec267a41a21554b460a4035b66"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.899139 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" event={"ID":"89d23cbd-5da9-4ae6-94eb-bd9cdc20ec71","Type":"ContainerStarted","Data":"4e3d8656533e5e722d6f2c5cb5d7f98f2e22f341b8b9d0cbf4650e4764b68b54"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.899213 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.902648 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" event={"ID":"1201d935-c98b-429a-b1ef-dd2cf2e3fd5d","Type":"ContainerStarted","Data":"af6f46f7f07a44c766a7fae0b517b625b853c210f9e8f6aca8327e2853321de1"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.904199 4731 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qmks6 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.904278 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" podUID="1201d935-c98b-429a-b1ef-dd2cf2e3fd5d" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.906258 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" event={"ID":"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e","Type":"ContainerStarted","Data":"41c36542f94ee201e65121017784534d20ff1ad46da89ffcf9070d79bfece2f7"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.906307 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" event={"ID":"a0daa5b0-809c-4237-bd24-b1eabc5a4a0e","Type":"ContainerStarted","Data":"56597d8e6a76d67ab07c04e178f3d0d3d0ed380af2ddbdee82fbbd1077c2055e"} Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.908448 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" event={"ID":"0e9596cb-d3e0-4103-9426-44e57ba1593d","Type":"ContainerStarted","Data":"5d596ad9f0293f9ba7a460ee6ac4082d7339d02b5a886bbdd637038f2465927e"} Nov 25 14:56:25 crc 
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.911432 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" event={"ID":"80231337-3fb3-499b-a626-48423f329e0b","Type":"ContainerStarted","Data":"9af8eb3c941eab7a69c372fa560a9d66949fbf419f2df335649f8c6704fdf7c8"}
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.911511 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" event={"ID":"80231337-3fb3-499b-a626-48423f329e0b","Type":"ContainerStarted","Data":"184603c8b9c05e0adb1be164101c8b862ca1de15f5c54adf3427d20b57c55706"}
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.914938 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" event={"ID":"dabdd005-1a40-4e3f-80e7-48afcac46987","Type":"ContainerStarted","Data":"324a73456ab0c36027709c950214c6fd6089aa49beffdfe77ad7bc903a219249"}
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.915001 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" event={"ID":"dabdd005-1a40-4e3f-80e7-48afcac46987","Type":"ContainerStarted","Data":"557a0900d62d0b9504744acff92c82c03dc34ee6421cb73fa17b69c573811d74"}
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.916626 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-twjxg" podStartSLOduration=118.916603987 podStartE2EDuration="1m58.916603987s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:25.915226391 +0000 UTC m=+144.662360821" watchObservedRunningTime="2025-11-25 14:56:25.916603987 +0000 UTC m=+144.663738417"
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.918062 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" event={"ID":"513cf31e-cfbc-4978-aa3a-b99736bdf9ca","Type":"ContainerStarted","Data":"756994bd9b2c7c5eaf9ca05e37b428e1a093b25e8188aa6573020556a6ab8d1b"}
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.918130 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" event={"ID":"513cf31e-cfbc-4978-aa3a-b99736bdf9ca","Type":"ContainerStarted","Data":"4e0df71e91cd371678b219a3ff6ad4bea11d11afe70c6aa3c6a034515006387d"}
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.919129 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz"
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.927845 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-jghzg" event={"ID":"22bfd188-d7a7-44f7-8d9b-c69886faf3a7","Type":"ContainerStarted","Data":"6eabf25cfb3cbec909521b8b411ae8de99404167f7d7880c7c4539ad107210f8"}
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.929481 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.929637 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.429615785 +0000 UTC m=+145.176750215 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.930488 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 14:56:25 crc kubenswrapper[4731]: E1125 14:56:25.933679 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.43365214 +0000 UTC m=+145.180786650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.951897 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" event={"ID":"e5ccef3b-b78c-42b7-aa17-b52800f28cc0","Type":"ContainerStarted","Data":"40e73c622a93bf92916db2d0c2b4145efafc2d5a37f4dde054e2f706af092ab3"}
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.957151 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-lscm9" podStartSLOduration=119.95712284 podStartE2EDuration="1m59.95712284s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:25.956043862 +0000 UTC m=+144.703178292" watchObservedRunningTime="2025-11-25 14:56:25.95712284 +0000 UTC m=+144.704257270"
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.975172 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" event={"ID":"3bfcce86-59b2-4bfb-b8da-9a7eb76e766c","Type":"ContainerStarted","Data":"ac040c4985acdeff973a96767f06b79ad0700c8c2b85cd8cc71c8b888ecf4514"}
Nov 25 14:56:25 crc kubenswrapper[4731]: I1125 14:56:25.996411 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" event={"ID":"9e2c002e-79b6-44a2-9789-fa2da26f3cd5","Type":"ContainerStarted","Data":"83ebf67e5159b2cfed79205958d59da36da79badffccd0f83513ea447924d787"}
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.012396 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" event={"ID":"a160b766-12c1-4549-b892-794fe9009a9e","Type":"ContainerStarted","Data":"58cdd75c6c8a485bf4ffd4bd564d56ccd4fe89106127fc70b70f8e90a44d32c1"}
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.023368 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-bqsdf" event={"ID":"557361ba-d1ec-4592-b8be-33cfba5f8d78","Type":"ContainerStarted","Data":"ed936ba2271394688b5166b4fd195b7ede62a962679a58397e4bc2040c7a0f08"}
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.025395 4731 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-tkwxv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.025441 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" podUID="0b48e8f6-4a4f-40da-8304-69ed663dc05b" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.026353 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.026383 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.026468 4731 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2w858 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body=
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.026557 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" podUID="50f274c0-be0d-4c02-a83e-b689a572a78e" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.027282 4731 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-t5mpt container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused" start-of-body=
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.027364 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" podUID="eb067501-cc83-4cdd-ad52-224774a043c4" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.30:8080/healthz\": dial tcp 10.217.0.30:8080: connect: connection refused"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.043143 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-fjq2t" podStartSLOduration=119.043114215 podStartE2EDuration="1m59.043114215s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.003121866 +0000 UTC m=+144.750256296" watchObservedRunningTime="2025-11-25 14:56:26.043114215 +0000 UTC m=+144.790248645"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.043769 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.044183 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.544163312 +0000 UTC m=+145.291297742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.046019 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-4rz42" podStartSLOduration=120.04601295 podStartE2EDuration="2m0.04601295s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.042572011 +0000 UTC m=+144.789706441" watchObservedRunningTime="2025-11-25 14:56:26.04601295 +0000 UTC m=+144.793147380"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.135332 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" podStartSLOduration=119.135300451 podStartE2EDuration="1m59.135300451s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.134160361 +0000 UTC m=+144.881294791" watchObservedRunningTime="2025-11-25 14:56:26.135300451 +0000 UTC m=+144.882434881"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.136012 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-zgzqt" podStartSLOduration=119.136006329 podStartE2EDuration="1m59.136006329s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.095336322 +0000 UTC m=+144.842470752" watchObservedRunningTime="2025-11-25 14:56:26.136006329 +0000 UTC m=+144.883140769"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.145844 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.147976 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.148355 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs"
Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.151265 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.651238025 +0000 UTC m=+145.398372455 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
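The pod_startup_latency_tracker lines are plain arithmetic over the timestamps they print: podStartSLOduration matches watchObservedRunningTime minus podCreationTimestamp in seconds, and podStartE2EDuration is the same value rendered as a Go duration string, which is why 119.135300451 sits next to "1m59.135300451s" for apiserver-7bbb656c7d-8q4qs. The zeroed firstStartedPulling/lastFinishedPulling values ("0001-01-01 00:00:00") indicate that no image pull was recorded for the pod. A minimal reproduction of the arithmetic, using the two timestamps from that entry:

    // slomath.go - reproduces podStartSLOduration / podStartE2EDuration for
    // the oauth-apiserver pod from the timestamps logged above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Errors ignored for brevity; both literals parse under RFC3339Nano.
        created, _ := time.Parse(time.RFC3339Nano, "2025-11-25T14:54:27Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-11-25T14:56:26.135300451Z")

        d := running.Sub(created)
        fmt.Println(d.Seconds()) // 119.135300451   -> podStartSLOduration
        fmt.Println(d)           // 1m59.135300451s -> podStartE2EDuration
    }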
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.174022 4731 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-8q4qs container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body=
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.174116 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" podUID="8eed496a-1d92-43f0-8571-6667803a45d0" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.7:8443/livez\": dial tcp 10.217.0.7:8443: connect: connection refused"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.174909 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" podStartSLOduration=120.174886789 podStartE2EDuration="2m0.174886789s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.17299272 +0000 UTC m=+144.920127150" watchObservedRunningTime="2025-11-25 14:56:26.174886789 +0000 UTC m=+144.922021219"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.228715 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-26nlv" podStartSLOduration=119.228693108 podStartE2EDuration="1m59.228693108s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.219290343 +0000 UTC m=+144.966424763" watchObservedRunningTime="2025-11-25 14:56:26.228693108 +0000 UTC m=+144.975827538"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.238048 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Nov 25 14:56:26 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld
Nov 25 14:56:26 crc kubenswrapper[4731]: [+]process-running ok
Nov 25 14:56:26 crc kubenswrapper[4731]: healthz check failed
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.238119 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.250644 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
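The router's startup probe fails differently from the connection-refused cases: its listener on localhost:1936 is up, but the health endpoint answers HTTP 500 together with an aggregated report, one [+]/[-] line per named check ("backend-http" and "has-synced" failing, "process-running" passing). The sketch below imitates only the reporting shape of such an endpoint; it is hand-rolled, not the openshift-router's actual health code, with the check names and port borrowed from the log output.

    // healthzsketch.go - a toy aggregated health endpoint in the style of the
    // report quoted above: every named check contributes a [+]/[-] line, any
    // failure makes the whole endpoint answer 500, and reasons are withheld.
    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    type check struct {
        name string
        fn   func() error
    }

    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError)
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        http.Handle("/healthz/ready", healthz([]check{
            {"backend-http", func() error { return fmt.Errorf("not ready") }},
            {"has-synced", func() error { return fmt.Errorf("not ready") }},
            {"process-running", func() error { return nil }},
        }))
        log.Fatal(http.ListenAndServe("localhost:1936", nil))
    }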
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.251123 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.75110218 +0000 UTC m=+145.498236610 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.352989 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.353683 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.853654655 +0000 UTC m=+145.600789275 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.389321 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vpcm5" podStartSLOduration=119.389304012 podStartE2EDuration="1m59.389304012s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.388779598 +0000 UTC m=+145.135914028" watchObservedRunningTime="2025-11-25 14:56:26.389304012 +0000 UTC m=+145.136438442" Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.391648 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4qb6n" podStartSLOduration=120.391641482 podStartE2EDuration="2m0.391641482s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.314103947 +0000 UTC m=+145.061238377" watchObservedRunningTime="2025-11-25 14:56:26.391641482 +0000 UTC m=+145.138775912" Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.454102 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.454276 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.954244449 +0000 UTC m=+145.701378879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.454353 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.455026 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:26.955017129 +0000 UTC m=+145.702151559 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.509510 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" podStartSLOduration=119.509489315 podStartE2EDuration="1m59.509489315s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.508444768 +0000 UTC m=+145.255579198" watchObservedRunningTime="2025-11-25 14:56:26.509489315 +0000 UTC m=+145.256623745" Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.556129 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.556344 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.056311312 +0000 UTC m=+145.803445742 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.556472 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.556877 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.056868966 +0000 UTC m=+145.804003386 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.565060 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-d9dwj" podStartSLOduration=119.565040949 podStartE2EDuration="1m59.565040949s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.549493255 +0000 UTC m=+145.296627695" watchObservedRunningTime="2025-11-25 14:56:26.565040949 +0000 UTC m=+145.312175379" Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.599252 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-nrh95" podStartSLOduration=120.599234847 podStartE2EDuration="2m0.599234847s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.59664681 +0000 UTC m=+145.343781260" watchObservedRunningTime="2025-11-25 14:56:26.599234847 +0000 UTC m=+145.346369277" Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.638937 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-bqsdf" podStartSLOduration=8.638918408 podStartE2EDuration="8.638918408s" podCreationTimestamp="2025-11-25 14:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.637937163 +0000 UTC m=+145.385071593" watchObservedRunningTime="2025-11-25 14:56:26.638918408 +0000 UTC m=+145.386052838" Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.657355 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.657559 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.157523322 +0000 UTC m=+145.904657752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.658064 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.658713 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.158684762 +0000 UTC m=+145.905819382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.693486 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-5jpdg" podStartSLOduration=119.693462856 podStartE2EDuration="1m59.693462856s" podCreationTimestamp="2025-11-25 14:54:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:26.693070186 +0000 UTC m=+145.440204616" watchObservedRunningTime="2025-11-25 14:56:26.693462856 +0000 UTC m=+145.440597276" Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.759400 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.759690 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.259650816 +0000 UTC m=+146.006785256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.760113 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.760624 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.260603721 +0000 UTC m=+146.007738151 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.861690 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.861929 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.361898423 +0000 UTC m=+146.109032853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.862481 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.862872 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.362862698 +0000 UTC m=+146.109997128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.963803 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.964011 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.463976806 +0000 UTC m=+146.211111226 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.964232 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:26 crc kubenswrapper[4731]: E1125 14:56:26.964666 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.464650423 +0000 UTC m=+146.211784853 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.991321 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:56:26 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 25 14:56:26 crc kubenswrapper[4731]: [+]process-running ok Nov 25 14:56:26 crc kubenswrapper[4731]: healthz check failed Nov 25 14:56:26 crc kubenswrapper[4731]: I1125 14:56:26.991409 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.013210 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.014294 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.016022 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.018072 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.028559 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.028724 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.030366 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.030420 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.051921 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qmks6" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.053283 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-tkwxv" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.065983 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.066761 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.067166 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.068301 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-11-25 14:56:27.568282406 +0000 UTC m=+146.315416836 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.169179 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.169299 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.169361 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.169387 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.169859 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.669837205 +0000 UTC m=+146.416971635 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.181657 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" podStartSLOduration=121.181631711 podStartE2EDuration="2m1.181631711s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:27.144411204 +0000 UTC m=+145.891545634" watchObservedRunningTime="2025-11-25 14:56:27.181631711 +0000 UTC m=+145.928766131" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.227565 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-bspt9" podStartSLOduration=9.227545004 podStartE2EDuration="9.227545004s" podCreationTimestamp="2025-11-25 14:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:27.183358956 +0000 UTC m=+145.930493406" watchObservedRunningTime="2025-11-25 14:56:27.227545004 +0000 UTC m=+145.974679424" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.235323 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.270801 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.271005 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.770976303 +0000 UTC m=+146.518110733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.271117 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.271474 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.771464496 +0000 UTC m=+146.518598926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.329809 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.372134 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.372526 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.872508602 +0000 UTC m=+146.619643022 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.473730 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.474346 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:27.974319087 +0000 UTC m=+146.721453517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.576250 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.577192 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:28.07717102 +0000 UTC m=+146.824305440 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.685121 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.685545 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:28.185529046 +0000 UTC m=+146.932663476 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.788142 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.788258 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:28.288236015 +0000 UTC m=+147.035370445 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.788614 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.788977 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:28.288968504 +0000 UTC m=+147.036102934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.891619 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.892113 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:28.392095504 +0000 UTC m=+147.139229934 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.988142 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:56:27 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 25 14:56:27 crc kubenswrapper[4731]: [+]process-running ok Nov 25 14:56:27 crc kubenswrapper[4731]: healthz check failed Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.988210 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:27 crc kubenswrapper[4731]: I1125 14:56:27.993604 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:27 crc kubenswrapper[4731]: E1125 14:56:27.994037 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:28.494021393 +0000 UTC m=+147.241155823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.094477 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:28 crc kubenswrapper[4731]: E1125 14:56:28.095718 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:28.595701266 +0000 UTC m=+147.342835696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.189291 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.196116 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:28 crc kubenswrapper[4731]: E1125 14:56:28.196508 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:28.696495045 +0000 UTC m=+147.443629475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: W1125 14:56:28.217496 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode0ce670c_0009_43aa_a6d2_ba22fb8b9adb.slice/crio-4a7fb9bdac059fab1bf4056971a83261e85f042aadb37c35002771b6a0eb947f WatchSource:0}: Error finding container 4a7fb9bdac059fab1bf4056971a83261e85f042aadb37c35002771b6a0eb947f: Status 404 returned error can't find the container with id 4a7fb9bdac059fab1bf4056971a83261e85f042aadb37c35002771b6a0eb947f Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.297187 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:28 crc kubenswrapper[4731]: E1125 14:56:28.297763 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:28.797747386 +0000 UTC m=+147.544881816 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.395198 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zjpwq"] Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.396373 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.401450 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:28 crc kubenswrapper[4731]: E1125 14:56:28.402498 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:28.902459717 +0000 UTC m=+147.649594157 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.402993 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.410857 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zjpwq"] Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.503361 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.503536 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-catalog-content\") pod \"community-operators-zjpwq\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.503585 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-utilities\") pod 
\"community-operators-zjpwq\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.503618 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-484x8\" (UniqueName: \"kubernetes.io/projected/4c945221-c3ce-443a-82c2-5eb45a00f65c-kube-api-access-484x8\") pod \"community-operators-zjpwq\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: E1125 14:56:28.503791 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.00377254 +0000 UTC m=+147.750906970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.594555 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q9p29"] Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.595875 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.604758 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.604907 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-catalog-content\") pod \"community-operators-zjpwq\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.604957 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-utilities\") pod \"community-operators-zjpwq\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.605002 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-484x8\" (UniqueName: \"kubernetes.io/projected/4c945221-c3ce-443a-82c2-5eb45a00f65c-kube-api-access-484x8\") pod \"community-operators-zjpwq\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.605635 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-catalog-content\") pod \"community-operators-zjpwq\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.605788 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-utilities\") pod \"community-operators-zjpwq\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: E1125 14:56:28.605975 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.105954156 +0000 UTC m=+147.853088586 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.611490 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.630831 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q9p29"] Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.643444 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-484x8\" (UniqueName: \"kubernetes.io/projected/4c945221-c3ce-443a-82c2-5eb45a00f65c-kube-api-access-484x8\") pod \"community-operators-zjpwq\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.706320 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.706605 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-catalog-content\") pod \"certified-operators-q9p29\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.706649 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-utilities\") pod \"certified-operators-q9p29\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.706670 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c27k8\" (UniqueName: \"kubernetes.io/projected/b750a03d-5b79-4d0c-b569-674089d31556-kube-api-access-c27k8\") pod \"certified-operators-q9p29\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: E1125 14:56:28.706849 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.206830437 +0000 UTC m=+147.953964867 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.779465 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vp8cm"] Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.780698 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.790522 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.809251 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-catalog-content\") pod \"certified-operators-q9p29\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.809517 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-utilities\") pod \"certified-operators-q9p29\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.809612 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c27k8\" (UniqueName: \"kubernetes.io/projected/b750a03d-5b79-4d0c-b569-674089d31556-kube-api-access-c27k8\") pod \"certified-operators-q9p29\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.809704 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-catalog-content\") pod \"community-operators-vp8cm\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.809826 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-utilities\") pod \"community-operators-vp8cm\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.809931 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twc9g\" (UniqueName: \"kubernetes.io/projected/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-kube-api-access-twc9g\") pod \"community-operators-vp8cm\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.810048 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.810805 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-catalog-content\") pod \"certified-operators-q9p29\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.811322 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-utilities\") pod \"certified-operators-q9p29\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: E1125 14:56:28.814482 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.314452904 +0000 UTC m=+148.061587334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.902029 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c27k8\" (UniqueName: \"kubernetes.io/projected/b750a03d-5b79-4d0c-b569-674089d31556-kube-api-access-c27k8\") pod \"certified-operators-q9p29\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.909336 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.910505 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:28 crc kubenswrapper[4731]: E1125 14:56:28.911206 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.411186758 +0000 UTC m=+148.158321188 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.911401 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.911553 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-catalog-content\") pod \"community-operators-vp8cm\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.911630 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-utilities\") pod \"community-operators-vp8cm\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.911723 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twc9g\" (UniqueName: \"kubernetes.io/projected/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-kube-api-access-twc9g\") pod \"community-operators-vp8cm\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:28 crc kubenswrapper[4731]: E1125 14:56:28.912399 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.412390009 +0000 UTC m=+148.159524439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.913143 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-utilities\") pod \"community-operators-vp8cm\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.913492 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-catalog-content\") pod \"community-operators-vp8cm\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.919363 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vp8cm"] Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.980928 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cr25s"] Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.982146 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.988122 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twc9g\" (UniqueName: \"kubernetes.io/projected/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-kube-api-access-twc9g\") pod \"community-operators-vp8cm\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.997379 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:56:28 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 25 14:56:28 crc kubenswrapper[4731]: [+]process-running ok Nov 25 14:56:28 crc kubenswrapper[4731]: healthz check failed Nov 25 14:56:28 crc kubenswrapper[4731]: I1125 14:56:28.997465 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.012514 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.012772 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-catalog-content\") pod \"certified-operators-cr25s\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.012804 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9lvs\" (UniqueName: \"kubernetes.io/projected/525c51bb-0303-4b1b-aeb9-593fb77df6af-kube-api-access-m9lvs\") pod \"certified-operators-cr25s\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.012942 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-utilities\") pod \"certified-operators-cr25s\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.013070 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.513053765 +0000 UTC m=+148.260188185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.024222 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cr25s"] Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.073303 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb","Type":"ContainerStarted","Data":"28676ed5a3bd6a28f782b10b2fa1ece8463ced63bd2eec11fda1588dbb96e5fc"} Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.073386 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb","Type":"ContainerStarted","Data":"4a7fb9bdac059fab1bf4056971a83261e85f042aadb37c35002771b6a0eb947f"} Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.083138 4731 generic.go:334] "Generic (PLEG): container finished" podID="8380637d-e32d-4018-8081-28b3076fad29" containerID="3f29d25ea98a8a078e02e86fe0303d3b903e873ea82880d9c7a7a06d31824d4e" exitCode=0 Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.083196 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" event={"ID":"8380637d-e32d-4018-8081-28b3076fad29","Type":"ContainerDied","Data":"3f29d25ea98a8a078e02e86fe0303d3b903e873ea82880d9c7a7a06d31824d4e"} Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.098579 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.114210 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-utilities\") pod \"certified-operators-cr25s\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.114271 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-catalog-content\") pod \"certified-operators-cr25s\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.114294 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9lvs\" (UniqueName: \"kubernetes.io/projected/525c51bb-0303-4b1b-aeb9-593fb77df6af-kube-api-access-m9lvs\") pod \"certified-operators-cr25s\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.114336 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.114775 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.614758778 +0000 UTC m=+148.361893208 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.115639 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-catalog-content\") pod \"certified-operators-cr25s\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.126077 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-utilities\") pod \"certified-operators-cr25s\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.152704 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9lvs\" (UniqueName: \"kubernetes.io/projected/525c51bb-0303-4b1b-aeb9-593fb77df6af-kube-api-access-m9lvs\") pod \"certified-operators-cr25s\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.217036 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.217433 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.717415266 +0000 UTC m=+148.464549696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.319485 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.320299 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.820285369 +0000 UTC m=+148.567419799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.337376 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.421529 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.421881 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:29.921859239 +0000 UTC m=+148.668993669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.522967 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.523369 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.023355967 +0000 UTC m=+148.770490397 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.589136 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zjpwq"] Nov 25 14:56:29 crc kubenswrapper[4731]: W1125 14:56:29.605912 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c945221_c3ce_443a_82c2_5eb45a00f65c.slice/crio-25943d9fff31d891bb04f7ed571beeb4cd86769d02960798aea5c34c65bff4c9 WatchSource:0}: Error finding container 25943d9fff31d891bb04f7ed571beeb4cd86769d02960798aea5c34c65bff4c9: Status 404 returned error can't find the container with id 25943d9fff31d891bb04f7ed571beeb4cd86769d02960798aea5c34c65bff4c9 Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.624213 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.624752 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.124732941 +0000 UTC m=+148.871867371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.725669 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.726441 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.226426774 +0000 UTC m=+148.973561204 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.825842 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q9p29"] Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.827078 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.827324 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.327298815 +0000 UTC m=+149.074433245 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.827561 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.828109 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.328095656 +0000 UTC m=+149.075230086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.931677 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.931971 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.431931495 +0000 UTC m=+149.179065935 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.932252 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:29 crc kubenswrapper[4731]: E1125 14:56:29.932861 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.432808597 +0000 UTC m=+149.179943187 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.986058 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:56:29 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 25 14:56:29 crc kubenswrapper[4731]: [+]process-running ok Nov 25 14:56:29 crc kubenswrapper[4731]: healthz check failed Nov 25 14:56:29 crc kubenswrapper[4731]: I1125 14:56:29.986284 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.002171 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cr25s"] Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.033319 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.033792 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.533775541 +0000 UTC m=+149.280909971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.041576 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qtqcb" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.108126 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9p29" event={"ID":"b750a03d-5b79-4d0c-b569-674089d31556","Type":"ContainerStarted","Data":"8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0"} Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.108185 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9p29" event={"ID":"b750a03d-5b79-4d0c-b569-674089d31556","Type":"ContainerStarted","Data":"084981be65b1653d62a97bd8ff952e3ed317474b066cdab2f42134e397ead574"} Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.108623 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vp8cm"] Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.114223 4731 generic.go:334] "Generic (PLEG): container finished" podID="e0ce670c-0009-43aa-a6d2-ba22fb8b9adb" containerID="28676ed5a3bd6a28f782b10b2fa1ece8463ced63bd2eec11fda1588dbb96e5fc" exitCode=0 Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.114293 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb","Type":"ContainerDied","Data":"28676ed5a3bd6a28f782b10b2fa1ece8463ced63bd2eec11fda1588dbb96e5fc"} Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.120000 4731 generic.go:334] "Generic (PLEG): container finished" podID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerID="d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51" exitCode=0 Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.120106 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zjpwq" event={"ID":"4c945221-c3ce-443a-82c2-5eb45a00f65c","Type":"ContainerDied","Data":"d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51"} Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.120138 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zjpwq" event={"ID":"4c945221-c3ce-443a-82c2-5eb45a00f65c","Type":"ContainerStarted","Data":"25943d9fff31d891bb04f7ed571beeb4cd86769d02960798aea5c34c65bff4c9"} Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.122406 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.128796 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" event={"ID":"a160b766-12c1-4549-b892-794fe9009a9e","Type":"ContainerStarted","Data":"cae1dab820c640ad8a81b854d74061b7e08814d12886d530bb81225d88d9da44"} Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.135695 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.136061 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.636044549 +0000 UTC m=+149.383178979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.137141 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr25s" event={"ID":"525c51bb-0303-4b1b-aeb9-593fb77df6af","Type":"ContainerStarted","Data":"e0215c87bac03b97c83658f93931bdbae4a234772cf28b22cfa5fc2b431b909b"} Nov 25 14:56:30 crc kubenswrapper[4731]: W1125 14:56:30.182421 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1b339ba_eeea_496d_a2b4_e5b96f4f2380.slice/crio-ce7ce700f7870b59c22a50486395c84253f36b1a696ad11612a25b6e4d575357 WatchSource:0}: Error finding container ce7ce700f7870b59c22a50486395c84253f36b1a696ad11612a25b6e4d575357: Status 404 returned error can't find the container with id ce7ce700f7870b59c22a50486395c84253f36b1a696ad11612a25b6e4d575357 Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.237034 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.237685 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.737645429 +0000 UTC m=+149.484779859 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.339747 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.340293 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.840263466 +0000 UTC m=+149.587397936 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.364346 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qj7cc"] Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.365638 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.365690 4731 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.368314 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.378571 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj7cc"] Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.398986 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.440772 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.440959 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8380637d-e32d-4018-8081-28b3076fad29-config-volume\") pod \"8380637d-e32d-4018-8081-28b3076fad29\" (UID: \"8380637d-e32d-4018-8081-28b3076fad29\") " Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.440992 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sp55p\" (UniqueName: \"kubernetes.io/projected/8380637d-e32d-4018-8081-28b3076fad29-kube-api-access-sp55p\") pod \"8380637d-e32d-4018-8081-28b3076fad29\" (UID: \"8380637d-e32d-4018-8081-28b3076fad29\") " Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.441015 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8380637d-e32d-4018-8081-28b3076fad29-secret-volume\") pod \"8380637d-e32d-4018-8081-28b3076fad29\" (UID: \"8380637d-e32d-4018-8081-28b3076fad29\") " Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.441202 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-utilities\") pod \"redhat-marketplace-qj7cc\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.441230 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-catalog-content\") pod \"redhat-marketplace-qj7cc\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.441263 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qszvq\" (UniqueName: \"kubernetes.io/projected/9d0d6998-e333-4b4e-93c3-d2d346675f07-kube-api-access-qszvq\") pod \"redhat-marketplace-qj7cc\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.441665 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:30.9416247 +0000 UTC m=+149.688759130 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.442371 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8380637d-e32d-4018-8081-28b3076fad29-config-volume" (OuterVolumeSpecName: "config-volume") pod "8380637d-e32d-4018-8081-28b3076fad29" (UID: "8380637d-e32d-4018-8081-28b3076fad29"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.454948 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8380637d-e32d-4018-8081-28b3076fad29-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8380637d-e32d-4018-8081-28b3076fad29" (UID: "8380637d-e32d-4018-8081-28b3076fad29"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.470761 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8380637d-e32d-4018-8081-28b3076fad29-kube-api-access-sp55p" (OuterVolumeSpecName: "kube-api-access-sp55p") pod "8380637d-e32d-4018-8081-28b3076fad29" (UID: "8380637d-e32d-4018-8081-28b3076fad29"). InnerVolumeSpecName "kube-api-access-sp55p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.518687 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.518756 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.545780 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-utilities\") pod \"redhat-marketplace-qj7cc\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.545864 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-catalog-content\") pod \"redhat-marketplace-qj7cc\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.545922 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qszvq\" (UniqueName: \"kubernetes.io/projected/9d0d6998-e333-4b4e-93c3-d2d346675f07-kube-api-access-qszvq\") pod \"redhat-marketplace-qj7cc\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.546002 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.546060 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8380637d-e32d-4018-8081-28b3076fad29-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.546071 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sp55p\" (UniqueName: \"kubernetes.io/projected/8380637d-e32d-4018-8081-28b3076fad29-kube-api-access-sp55p\") on node \"crc\" DevicePath \"\"" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.546082 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8380637d-e32d-4018-8081-28b3076fad29-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.546421 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:31.046406723 +0000 UTC m=+149.793541153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.546739 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-catalog-content\") pod \"redhat-marketplace-qj7cc\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.547407 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-utilities\") pod \"redhat-marketplace-qj7cc\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.566086 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qszvq\" (UniqueName: \"kubernetes.io/projected/9d0d6998-e333-4b4e-93c3-d2d346675f07-kube-api-access-qszvq\") pod \"redhat-marketplace-qj7cc\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.645009 4731 patch_prober.go:28] interesting pod/apiserver-76f77b778f-n47rp container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Nov 25 14:56:30 crc kubenswrapper[4731]: [+]log ok Nov 25 14:56:30 crc kubenswrapper[4731]: [+]etcd ok Nov 25 14:56:30 crc kubenswrapper[4731]: [+]poststarthook/start-apiserver-admission-initializer ok Nov 25 14:56:30 crc 
kubenswrapper[4731]: [+]poststarthook/generic-apiserver-start-informers ok Nov 25 14:56:30 crc kubenswrapper[4731]: [+]poststarthook/max-in-flight-filter ok Nov 25 14:56:30 crc kubenswrapper[4731]: [+]poststarthook/storage-object-count-tracker-hook ok Nov 25 14:56:30 crc kubenswrapper[4731]: [+]poststarthook/image.openshift.io-apiserver-caches ok Nov 25 14:56:30 crc kubenswrapper[4731]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Nov 25 14:56:30 crc kubenswrapper[4731]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Nov 25 14:56:30 crc kubenswrapper[4731]: [+]poststarthook/project.openshift.io-projectcache ok Nov 25 14:56:30 crc kubenswrapper[4731]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Nov 25 14:56:30 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-startinformers ok Nov 25 14:56:30 crc kubenswrapper[4731]: [+]poststarthook/openshift.io-restmapperupdater ok Nov 25 14:56:30 crc kubenswrapper[4731]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Nov 25 14:56:30 crc kubenswrapper[4731]: livez check failed Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.645108 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" podUID="a0daa5b0-809c-4237-bd24-b1eabc5a4a0e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.647728 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.648378 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:31.148358802 +0000 UTC m=+149.895493232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.749229 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.751206 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.751672 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:31.251650976 +0000 UTC m=+149.998785416 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.773036 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mfskb"] Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.773295 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8380637d-e32d-4018-8081-28b3076fad29" containerName="collect-profiles" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.773308 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8380637d-e32d-4018-8081-28b3076fad29" containerName="collect-profiles" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.773417 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8380637d-e32d-4018-8081-28b3076fad29" containerName="collect-profiles" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.774467 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.791701 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfskb"] Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.851774 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.852152 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrffz\" (UniqueName: \"kubernetes.io/projected/a55f5a5e-8ee3-461e-81b3-78c56a873782-kube-api-access-jrffz\") pod \"redhat-marketplace-mfskb\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.852197 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-catalog-content\") pod \"redhat-marketplace-mfskb\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.852302 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-utilities\") pod \"redhat-marketplace-mfskb\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.852434 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:31.352410764 +0000 UTC m=+150.099545194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.880226 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.880266 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.885438 4731 patch_prober.go:28] interesting pod/console-f9d7485db-dhtk5 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.885539 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dhtk5" podUID="5a457d06-8bf4-4077-a3e9-d25ff971832b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.10:8443/health\": dial tcp 10.217.0.10:8443: connect: connection refused" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.953282 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.953737 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-utilities\") pod \"redhat-marketplace-mfskb\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.953892 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrffz\" (UniqueName: \"kubernetes.io/projected/a55f5a5e-8ee3-461e-81b3-78c56a873782-kube-api-access-jrffz\") pod \"redhat-marketplace-mfskb\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.953925 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-catalog-content\") pod \"redhat-marketplace-mfskb\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:30 crc kubenswrapper[4731]: E1125 14:56:30.954222 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:31.45419642 +0000 UTC m=+150.201331040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.955317 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-utilities\") pod \"redhat-marketplace-mfskb\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.955935 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-catalog-content\") pod \"redhat-marketplace-mfskb\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.978967 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrffz\" (UniqueName: \"kubernetes.io/projected/a55f5a5e-8ee3-461e-81b3-78c56a873782-kube-api-access-jrffz\") pod \"redhat-marketplace-mfskb\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.985149 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:56:30 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 25 14:56:30 crc kubenswrapper[4731]: [+]process-running ok Nov 25 14:56:30 crc kubenswrapper[4731]: healthz check failed Nov 25 14:56:30 crc kubenswrapper[4731]: I1125 14:56:30.985219 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.055181 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:31 crc kubenswrapper[4731]: E1125 14:56:31.055458 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-11-25 14:56:31.55542005 +0000 UTC m=+150.302554480 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.055619 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:31 crc kubenswrapper[4731]: E1125 14:56:31.056076 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-11-25 14:56:31.556059917 +0000 UTC m=+150.303194347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-bbgwg" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.059739 4731 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-11-25T14:56:30.365710297Z","Handler":null,"Name":""} Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.066188 4731 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.066235 4731 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.092650 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.093049 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.093106 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.093191 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.093264 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.145561 4731 generic.go:334] "Generic (PLEG): container finished" podID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerID="853ec981ed68284335ed6ec37fa46086210dccc7ef8f7c0ed357b577e496b0b4" exitCode=0 Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.145704 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr25s" event={"ID":"525c51bb-0303-4b1b-aeb9-593fb77df6af","Type":"ContainerDied","Data":"853ec981ed68284335ed6ec37fa46086210dccc7ef8f7c0ed357b577e496b0b4"} Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.148574 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" event={"ID":"8380637d-e32d-4018-8081-28b3076fad29","Type":"ContainerDied","Data":"cf59963683d10918fbadddbfd01e2272f2576bcac120673430d6d47a3c49a82d"} Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.148630 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf59963683d10918fbadddbfd01e2272f2576bcac120673430d6d47a3c49a82d" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.148663 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.150256 4731 generic.go:334] "Generic (PLEG): container finished" podID="b750a03d-5b79-4d0c-b569-674089d31556" containerID="8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0" exitCode=0 Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.150328 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9p29" event={"ID":"b750a03d-5b79-4d0c-b569-674089d31556","Type":"ContainerDied","Data":"8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0"} Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.154874 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.156381 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.157142 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" event={"ID":"a160b766-12c1-4549-b892-794fe9009a9e","Type":"ContainerStarted","Data":"0b2f25fa49d2c1b04112af074f3cf8a9d3454e08b8fbdfad00d8a5fb528efd52"} Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.157177 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" event={"ID":"a160b766-12c1-4549-b892-794fe9009a9e","Type":"ContainerStarted","Data":"8bb293b2a0cd2bf29ea4310816cfc4b200e2ade167143a85c0c944eee96e1beb"} Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.159182 4731 generic.go:334] "Generic (PLEG): container finished" podID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerID="c3bfd7f757e88bb9eaa0346eea5b6c57c346bfd3ee4bb473340a4e9a03a1f704" exitCode=0 Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.159299 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp8cm" event={"ID":"c1b339ba-eeea-496d-a2b4-e5b96f4f2380","Type":"ContainerDied","Data":"c3bfd7f757e88bb9eaa0346eea5b6c57c346bfd3ee4bb473340a4e9a03a1f704"} Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.159336 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp8cm" event={"ID":"c1b339ba-eeea-496d-a2b4-e5b96f4f2380","Type":"ContainerStarted","Data":"ce7ce700f7870b59c22a50486395c84253f36b1a696ad11612a25b6e4d575357"} Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.166785 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8q4qs" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.169407 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.206441 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj7cc"] Nov 25 14:56:31 crc kubenswrapper[4731]: W1125 14:56:31.222980 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d0d6998_e333_4b4e_93c3_d2d346675f07.slice/crio-ee9598a7a6e9f4c4539a47bf34aa253c2f291d1534455df82e1913ef91109818 WatchSource:0}: Error finding container ee9598a7a6e9f4c4539a47bf34aa253c2f291d1534455df82e1913ef91109818: Status 404 returned error can't find the container with id ee9598a7a6e9f4c4539a47bf34aa253c2f291d1534455df82e1913ef91109818 Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.246017 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-cmr7l" podStartSLOduration=13.245983082 podStartE2EDuration="13.245983082s" podCreationTimestamp="2025-11-25 14:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:31.241460925 +0000 UTC m=+149.988595395" watchObservedRunningTime="2025-11-25 14:56:31.245983082 +0000 UTC m=+149.993117532" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.259214 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.272506 4731 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.272560 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.353967 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.354869 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.359428 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.361216 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.362537 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0766743e-eb60-4e84-8c7e-b9e57d984825-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0766743e-eb60-4e84-8c7e-b9e57d984825\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.362753 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0766743e-eb60-4e84-8c7e-b9e57d984825-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0766743e-eb60-4e84-8c7e-b9e57d984825\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.374940 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.437920 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.466712 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0766743e-eb60-4e84-8c7e-b9e57d984825-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0766743e-eb60-4e84-8c7e-b9e57d984825\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.466804 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0766743e-eb60-4e84-8c7e-b9e57d984825-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0766743e-eb60-4e84-8c7e-b9e57d984825\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.467891 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0766743e-eb60-4e84-8c7e-b9e57d984825-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0766743e-eb60-4e84-8c7e-b9e57d984825\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.506384 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0766743e-eb60-4e84-8c7e-b9e57d984825-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0766743e-eb60-4e84-8c7e-b9e57d984825\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.507280 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-bbgwg\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.552159 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.570739 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-md9zz"] Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.572409 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.578863 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.580831 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.585031 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-md9zz"] Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.624126 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfskb"] Nov 25 14:56:31 crc kubenswrapper[4731]: W1125 14:56:31.626894 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda55f5a5e_8ee3_461e_81b3_78c56a873782.slice/crio-adfae1c3f83b4dab762d17221943a8906c7e76f6a209ad12b42f4e4d834141cc WatchSource:0}: Error finding container adfae1c3f83b4dab762d17221943a8906c7e76f6a209ad12b42f4e4d834141cc: Status 404 returned error can't find the container with id adfae1c3f83b4dab762d17221943a8906c7e76f6a209ad12b42f4e4d834141cc Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.700165 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.771457 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kube-api-access\") pod \"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb\" (UID: \"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb\") " Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.771567 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kubelet-dir\") pod \"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb\" (UID: \"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb\") " Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.771810 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-utilities\") pod \"redhat-operators-md9zz\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.771943 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-catalog-content\") pod \"redhat-operators-md9zz\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.772040 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7qk7\" (UniqueName: \"kubernetes.io/projected/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-kube-api-access-r7qk7\") pod \"redhat-operators-md9zz\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.772168 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e0ce670c-0009-43aa-a6d2-ba22fb8b9adb" (UID: "e0ce670c-0009-43aa-a6d2-ba22fb8b9adb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.777378 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2w858" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.783465 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e0ce670c-0009-43aa-a6d2-ba22fb8b9adb" (UID: "e0ce670c-0009-43aa-a6d2-ba22fb8b9adb"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.831791 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbgwg"] Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.873562 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7qk7\" (UniqueName: \"kubernetes.io/projected/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-kube-api-access-r7qk7\") pod \"redhat-operators-md9zz\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.873639 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-utilities\") pod \"redhat-operators-md9zz\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.873719 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-catalog-content\") pod \"redhat-operators-md9zz\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.873807 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.873848 4731 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e0ce670c-0009-43aa-a6d2-ba22fb8b9adb-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.874443 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-catalog-content\") pod \"redhat-operators-md9zz\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.874607 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-utilities\") pod \"redhat-operators-md9zz\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.892532 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7qk7\" (UniqueName: \"kubernetes.io/projected/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-kube-api-access-r7qk7\") pod \"redhat-operators-md9zz\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.897793 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.977102 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5vj2x"] Nov 25 14:56:31 crc kubenswrapper[4731]: E1125 14:56:31.977990 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0ce670c-0009-43aa-a6d2-ba22fb8b9adb" containerName="pruner" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.978017 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0ce670c-0009-43aa-a6d2-ba22fb8b9adb" containerName="pruner" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.978191 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0ce670c-0009-43aa-a6d2-ba22fb8b9adb" containerName="pruner" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.984908 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5vj2x"] Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.984970 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.985253 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.993239 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:56:31 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 25 14:56:31 crc kubenswrapper[4731]: [+]process-running ok Nov 25 14:56:31 crc kubenswrapper[4731]: healthz check failed Nov 25 14:56:31 crc kubenswrapper[4731]: I1125 14:56:31.993313 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.065706 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.082147 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f87rs\" (UniqueName: \"kubernetes.io/projected/161c00b9-b4f1-4c6b-9ee9-f4b056415601-kube-api-access-f87rs\") pod \"redhat-operators-5vj2x\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.082315 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-catalog-content\") pod \"redhat-operators-5vj2x\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.082345 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-utilities\") pod \"redhat-operators-5vj2x\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " 
pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.172961 4731 generic.go:334] "Generic (PLEG): container finished" podID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerID="e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69" exitCode=0 Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.173032 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj7cc" event={"ID":"9d0d6998-e333-4b4e-93c3-d2d346675f07","Type":"ContainerDied","Data":"e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69"} Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.173071 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj7cc" event={"ID":"9d0d6998-e333-4b4e-93c3-d2d346675f07","Type":"ContainerStarted","Data":"ee9598a7a6e9f4c4539a47bf34aa253c2f291d1534455df82e1913ef91109818"} Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.183689 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f87rs\" (UniqueName: \"kubernetes.io/projected/161c00b9-b4f1-4c6b-9ee9-f4b056415601-kube-api-access-f87rs\") pod \"redhat-operators-5vj2x\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.183795 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-catalog-content\") pod \"redhat-operators-5vj2x\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.186824 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-catalog-content\") pod \"redhat-operators-5vj2x\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.188142 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-utilities\") pod \"redhat-operators-5vj2x\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.190036 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" event={"ID":"bb91ed12-623e-4b63-b24d-e47c36f7ef7b","Type":"ContainerStarted","Data":"9caa92c07f89b3556b46e2a20fb54223e670677f66c8d867e859baf605671505"} Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.188552 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-utilities\") pod \"redhat-operators-5vj2x\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.211279 4731 generic.go:334] "Generic (PLEG): container finished" podID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerID="a806bc7c78310bf6100510d6603cec924a90dd13e809e96d484f6436ef2c3db2" exitCode=0 Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.212703 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfskb" event={"ID":"a55f5a5e-8ee3-461e-81b3-78c56a873782","Type":"ContainerDied","Data":"a806bc7c78310bf6100510d6603cec924a90dd13e809e96d484f6436ef2c3db2"} Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.212868 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfskb" event={"ID":"a55f5a5e-8ee3-461e-81b3-78c56a873782","Type":"ContainerStarted","Data":"adfae1c3f83b4dab762d17221943a8906c7e76f6a209ad12b42f4e4d834141cc"} Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.253703 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f87rs\" (UniqueName: \"kubernetes.io/projected/161c00b9-b4f1-4c6b-9ee9-f4b056415601-kube-api-access-f87rs\") pod \"redhat-operators-5vj2x\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.265253 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"e0ce670c-0009-43aa-a6d2-ba22fb8b9adb","Type":"ContainerDied","Data":"4a7fb9bdac059fab1bf4056971a83261e85f042aadb37c35002771b6a0eb947f"} Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.265319 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a7fb9bdac059fab1bf4056971a83261e85f042aadb37c35002771b6a0eb947f" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.265463 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.268570 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0766743e-eb60-4e84-8c7e-b9e57d984825","Type":"ContainerStarted","Data":"04bcc7fb6412778453f42cea3d14e8b837fa37a223c54b284b301ab6454345c2"} Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.271714 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-md9zz"] Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.388511 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.801572 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.802116 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.803223 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.823733 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.836533 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.903476 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.903587 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.909985 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.910700 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.918498 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.985257 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.986522 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.992694 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:56:32 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 25 14:56:32 crc kubenswrapper[4731]: [+]process-running ok Nov 25 14:56:32 crc kubenswrapper[4731]: healthz check failed Nov 25 14:56:32 crc kubenswrapper[4731]: I1125 14:56:32.992751 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.051623 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5vj2x"] Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.300964 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0766743e-eb60-4e84-8c7e-b9e57d984825","Type":"ContainerStarted","Data":"02547f6f33e352fff359368293bd79858042859961858de87dbbe601ba5932c4"} Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.317660 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" event={"ID":"bb91ed12-623e-4b63-b24d-e47c36f7ef7b","Type":"ContainerStarted","Data":"29a73f562fabe54630074f615e71e9e9ddb06b57c51b2a23f7ac4a10211ae035"} Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.318007 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.331396 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vj2x" event={"ID":"161c00b9-b4f1-4c6b-9ee9-f4b056415601","Type":"ContainerStarted","Data":"c3816ea4370e0e376870cf75e6f32f4344954e5628c3faadda39a14522777b16"} Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.358387 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" podStartSLOduration=127.358358328 podStartE2EDuration="2m7.358358328s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:56:33.354647482 +0000 UTC 
m=+152.101781922" watchObservedRunningTime="2025-11-25 14:56:33.358358328 +0000 UTC m=+152.105492758" Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.360805 4731 generic.go:334] "Generic (PLEG): container finished" podID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerID="659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a" exitCode=0 Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.360909 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-md9zz" event={"ID":"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca","Type":"ContainerDied","Data":"659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a"} Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.360957 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-md9zz" event={"ID":"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca","Type":"ContainerStarted","Data":"f44c3660070b22b81d1aec91b377d8b8298562b43b2d4da5eb9bd324feb4a0ca"} Nov 25 14:56:33 crc kubenswrapper[4731]: W1125 14:56:33.774679 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-1ed04a272316231826b3da8669d440966200c0217ca213519434ea65d35296c6 WatchSource:0}: Error finding container 1ed04a272316231826b3da8669d440966200c0217ca213519434ea65d35296c6: Status 404 returned error can't find the container with id 1ed04a272316231826b3da8669d440966200c0217ca213519434ea65d35296c6 Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.985531 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:56:33 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 25 14:56:33 crc kubenswrapper[4731]: [+]process-running ok Nov 25 14:56:33 crc kubenswrapper[4731]: healthz check failed Nov 25 14:56:33 crc kubenswrapper[4731]: I1125 14:56:33.985621 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:34 crc kubenswrapper[4731]: W1125 14:56:34.000179 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-43e34bdf029acee647dd5446d5a210ac9179a9b2832ef066454752b9937a2b39 WatchSource:0}: Error finding container 43e34bdf029acee647dd5446d5a210ac9179a9b2832ef066454752b9937a2b39: Status 404 returned error can't find the container with id 43e34bdf029acee647dd5446d5a210ac9179a9b2832ef066454752b9937a2b39 Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.379667 4731 generic.go:334] "Generic (PLEG): container finished" podID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerID="f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc" exitCode=0 Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.380261 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vj2x" event={"ID":"161c00b9-b4f1-4c6b-9ee9-f4b056415601","Type":"ContainerDied","Data":"f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc"} Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.394184 4731 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bcefb8350bce7c4f9e8dc4f3600b958b032e1e5b46ddbf429604bfeb7130a4d7"} Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.394259 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"1ed04a272316231826b3da8669d440966200c0217ca213519434ea65d35296c6"} Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.403225 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e8cf499fedac24cbfffaad3bce3d9dbbe1947615d56b28844cdc4914fc4c5e58"} Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.403283 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"53a78dbd93de43ed26bdb1c7721153c0a7aff80e119441fdf4900436a580085b"} Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.415683 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"dbdc5114e8fe91af6a0e96f776a92abb0c9631cfb1967af87f39338660c26cb1"} Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.415747 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"43e34bdf029acee647dd5446d5a210ac9179a9b2832ef066454752b9937a2b39"} Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.416082 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.427773 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0766743e-eb60-4e84-8c7e-b9e57d984825","Type":"ContainerDied","Data":"02547f6f33e352fff359368293bd79858042859961858de87dbbe601ba5932c4"} Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.427236 4731 generic.go:334] "Generic (PLEG): container finished" podID="0766743e-eb60-4e84-8c7e-b9e57d984825" containerID="02547f6f33e352fff359368293bd79858042859961858de87dbbe601ba5932c4" exitCode=0 Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.752686 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.855977 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0766743e-eb60-4e84-8c7e-b9e57d984825-kube-api-access\") pod \"0766743e-eb60-4e84-8c7e-b9e57d984825\" (UID: \"0766743e-eb60-4e84-8c7e-b9e57d984825\") " Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.856136 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0766743e-eb60-4e84-8c7e-b9e57d984825-kubelet-dir\") pod \"0766743e-eb60-4e84-8c7e-b9e57d984825\" (UID: \"0766743e-eb60-4e84-8c7e-b9e57d984825\") " Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.856314 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0766743e-eb60-4e84-8c7e-b9e57d984825-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0766743e-eb60-4e84-8c7e-b9e57d984825" (UID: "0766743e-eb60-4e84-8c7e-b9e57d984825"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.856714 4731 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0766743e-eb60-4e84-8c7e-b9e57d984825-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.877263 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0766743e-eb60-4e84-8c7e-b9e57d984825-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0766743e-eb60-4e84-8c7e-b9e57d984825" (UID: "0766743e-eb60-4e84-8c7e-b9e57d984825"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.958677 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0766743e-eb60-4e84-8c7e-b9e57d984825-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.983644 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:56:34 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 25 14:56:34 crc kubenswrapper[4731]: [+]process-running ok Nov 25 14:56:34 crc kubenswrapper[4731]: healthz check failed Nov 25 14:56:34 crc kubenswrapper[4731]: I1125 14:56:34.983732 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:35 crc kubenswrapper[4731]: I1125 14:56:35.443984 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0766743e-eb60-4e84-8c7e-b9e57d984825","Type":"ContainerDied","Data":"04bcc7fb6412778453f42cea3d14e8b837fa37a223c54b284b301ab6454345c2"} Nov 25 14:56:35 crc kubenswrapper[4731]: I1125 14:56:35.444069 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04bcc7fb6412778453f42cea3d14e8b837fa37a223c54b284b301ab6454345c2" Nov 25 14:56:35 crc kubenswrapper[4731]: I1125 14:56:35.444084 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Nov 25 14:56:35 crc kubenswrapper[4731]: I1125 14:56:35.526281 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:35 crc kubenswrapper[4731]: I1125 14:56:35.531389 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-n47rp" Nov 25 14:56:35 crc kubenswrapper[4731]: I1125 14:56:35.982233 4731 patch_prober.go:28] interesting pod/router-default-5444994796-sclpc container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Nov 25 14:56:35 crc kubenswrapper[4731]: [-]has-synced failed: reason withheld Nov 25 14:56:35 crc kubenswrapper[4731]: [+]process-running ok Nov 25 14:56:35 crc kubenswrapper[4731]: healthz check failed Nov 25 14:56:35 crc kubenswrapper[4731]: I1125 14:56:35.982302 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-sclpc" podUID="d81b8c12-b875-47a7-b2ee-c8cac5539c87" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Nov 25 14:56:36 crc kubenswrapper[4731]: I1125 14:56:36.426358 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 14:56:36 crc kubenswrapper[4731]: I1125 14:56:36.982961 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:36 crc kubenswrapper[4731]: I1125 14:56:36.987769 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-sclpc" Nov 25 14:56:37 crc kubenswrapper[4731]: I1125 14:56:37.116602 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-bspt9" Nov 25 14:56:40 crc kubenswrapper[4731]: I1125 14:56:40.887256 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:40 crc kubenswrapper[4731]: I1125 14:56:40.891422 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 14:56:41 crc kubenswrapper[4731]: I1125 14:56:41.093803 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:56:41 crc kubenswrapper[4731]: I1125 14:56:41.093910 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:56:41 crc kubenswrapper[4731]: I1125 14:56:41.094650 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:56:41 crc kubenswrapper[4731]: I1125 14:56:41.094699 4731 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:56:49 crc kubenswrapper[4731]: I1125 14:56:49.098461 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:49 crc kubenswrapper[4731]: I1125 14:56:49.106855 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/12559634-427c-4ce7-b9e8-d2dd37299642-metrics-certs\") pod \"network-metrics-daemon-7xvpg\" (UID: \"12559634-427c-4ce7-b9e8-d2dd37299642\") " pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:49 crc kubenswrapper[4731]: I1125 14:56:49.235122 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-7xvpg" Nov 25 14:56:50 crc kubenswrapper[4731]: I1125 14:56:50.040643 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:56:50 crc kubenswrapper[4731]: I1125 14:56:50.040770 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:56:51 crc kubenswrapper[4731]: I1125 14:56:51.093115 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:56:51 crc kubenswrapper[4731]: I1125 14:56:51.093199 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:56:51 crc kubenswrapper[4731]: I1125 14:56:51.093223 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:56:51 crc kubenswrapper[4731]: I1125 14:56:51.093257 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:56:51 crc kubenswrapper[4731]: I1125 14:56:51.093299 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-nzzxp" Nov 25 14:56:51 
Nov 25 14:56:51 crc kubenswrapper[4731]: I1125 14:56:51.094076 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:56:51 crc kubenswrapper[4731]: I1125 14:56:51.094074 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"b0a1c04442e371bedc0c7ffbf9545aaff16bb5b05c1b4a559db0b68892b4dab0"} pod="openshift-console/downloads-7954f5f757-nzzxp" containerMessage="Container download-server failed liveness probe, will be restarted" Nov 25 14:56:51 crc kubenswrapper[4731]: I1125 14:56:51.094207 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" containerID="cri-o://b0a1c04442e371bedc0c7ffbf9545aaff16bb5b05c1b4a559db0b68892b4dab0" gracePeriod=2 Nov 25 14:56:51 crc kubenswrapper[4731]: I1125 14:56:51.094187 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:56:51 crc kubenswrapper[4731]: I1125 14:56:51.559360 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" Nov 25 14:56:52 crc kubenswrapper[4731]: I1125 14:56:52.578123 4731 generic.go:334] "Generic (PLEG): container finished" podID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerID="b0a1c04442e371bedc0c7ffbf9545aaff16bb5b05c1b4a559db0b68892b4dab0" exitCode=0 Nov 25 14:56:52 crc kubenswrapper[4731]: I1125 14:56:52.578174 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nzzxp" event={"ID":"fc54c4d0-18b0-4249-9e18-d8a31c0135ae","Type":"ContainerDied","Data":"b0a1c04442e371bedc0c7ffbf9545aaff16bb5b05c1b4a559db0b68892b4dab0"} Nov 25 14:57:01 crc kubenswrapper[4731]: I1125 14:57:01.094014 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:57:01 crc kubenswrapper[4731]: I1125 14:57:01.095079 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:57:02 crc kubenswrapper[4731]: I1125 14:57:02.011440 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5ldqz" Nov 25 14:57:11 crc kubenswrapper[4731]: I1125 14:57:11.094101 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:57:11 crc kubenswrapper[4731]: I1125 14:57:11.094785 4731 prober.go:107]
"Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:57:12 crc kubenswrapper[4731]: E1125 14:57:12.436617 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 14:57:12 crc kubenswrapper[4731]: E1125 14:57:12.437174 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-484x8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zjpwq_openshift-marketplace(4c945221-c3ce-443a-82c2-5eb45a00f65c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:57:12 crc kubenswrapper[4731]: E1125 14:57:12.438483 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zjpwq" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" Nov 25 14:57:12 crc kubenswrapper[4731]: I1125 14:57:12.996035 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Nov 25 14:57:14 crc kubenswrapper[4731]: E1125 14:57:14.111075 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zjpwq" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" Nov 25 14:57:14 crc kubenswrapper[4731]: E1125 
Nov 25 14:57:14 crc kubenswrapper[4731]: E1125 14:57:14.314232 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 14:57:14 crc kubenswrapper[4731]: E1125 14:57:14.314686 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m9lvs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-cr25s_openshift-marketplace(525c51bb-0303-4b1b-aeb9-593fb77df6af): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:57:14 crc kubenswrapper[4731]: E1125 14:57:14.316535 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-cr25s" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" Nov 25 14:57:14 crc kubenswrapper[4731]: E1125 14:57:14.497720 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Nov 25 14:57:14 crc kubenswrapper[4731]: E1125 14:57:14.498013 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-twc9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vp8cm_openshift-marketplace(c1b339ba-eeea-496d-a2b4-e5b96f4f2380): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:57:14 crc kubenswrapper[4731]: E1125 14:57:14.499990 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vp8cm" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" Nov 25 14:57:15 crc kubenswrapper[4731]: E1125 14:57:15.335516 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vp8cm" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" Nov 25 14:57:15 crc kubenswrapper[4731]: E1125 14:57:15.336358 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-cr25s" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" Nov 25 14:57:15 crc kubenswrapper[4731]: E1125 14:57:15.416498 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 14:57:15 crc kubenswrapper[4731]: E1125 14:57:15.416768 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qszvq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-qj7cc_openshift-marketplace(9d0d6998-e333-4b4e-93c3-d2d346675f07): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:57:15 crc kubenswrapper[4731]: E1125 14:57:15.418293 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-qj7cc" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.177666 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 14:57:16 crc kubenswrapper[4731]: E1125 14:57:16.178305 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0766743e-eb60-4e84-8c7e-b9e57d984825" containerName="pruner" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.178319 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0766743e-eb60-4e84-8c7e-b9e57d984825" containerName="pruner" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.178466 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0766743e-eb60-4e84-8c7e-b9e57d984825" containerName="pruner" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.178927 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.184430 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.190450 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.190651 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.303076 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00171e21-66c4-4fae-a582-add16a773996-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"00171e21-66c4-4fae-a582-add16a773996\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.303165 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00171e21-66c4-4fae-a582-add16a773996-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"00171e21-66c4-4fae-a582-add16a773996\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.404741 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00171e21-66c4-4fae-a582-add16a773996-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"00171e21-66c4-4fae-a582-add16a773996\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.404837 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00171e21-66c4-4fae-a582-add16a773996-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"00171e21-66c4-4fae-a582-add16a773996\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.405364 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00171e21-66c4-4fae-a582-add16a773996-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"00171e21-66c4-4fae-a582-add16a773996\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.444915 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00171e21-66c4-4fae-a582-add16a773996-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"00171e21-66c4-4fae-a582-add16a773996\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:57:16 crc kubenswrapper[4731]: I1125 14:57:16.519144 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.444456 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-qj7cc" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.460298 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.460491 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f87rs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5vj2x_openshift-marketplace(161c00b9-b4f1-4c6b-9ee9-f4b056415601): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.462780 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5vj2x" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.523397 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.523593 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7qk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-md9zz_openshift-marketplace(6c85939e-8d19-4f0a-8ad9-2dd4f29717ca): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.524808 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-md9zz" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.548885 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.549423 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c27k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-q9p29_openshift-marketplace(b750a03d-5b79-4d0c-b569-674089d31556): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.550753 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-q9p29" podUID="b750a03d-5b79-4d0c-b569-674089d31556" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.560607 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.560882 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrffz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mfskb_openshift-marketplace(a55f5a5e-8ee3-461e-81b3-78c56a873782): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.562136 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mfskb" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" Nov 25 14:57:18 crc kubenswrapper[4731]: I1125 14:57:18.740642 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-nzzxp" event={"ID":"fc54c4d0-18b0-4249-9e18-d8a31c0135ae","Type":"ContainerStarted","Data":"19f709fade9fad34c5fa4754016dce228207fd77a587072939cb91d844508965"} Nov 25 14:57:18 crc kubenswrapper[4731]: I1125 14:57:18.744024 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.744127 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5vj2x" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" Nov 25 14:57:18 crc kubenswrapper[4731]: I1125 14:57:18.744121 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.744469 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" 
with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-md9zz" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.744543 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-q9p29" podUID="b750a03d-5b79-4d0c-b569-674089d31556" Nov 25 14:57:18 crc kubenswrapper[4731]: E1125 14:57:18.745009 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mfskb" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" Nov 25 14:57:18 crc kubenswrapper[4731]: I1125 14:57:18.790576 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Nov 25 14:57:18 crc kubenswrapper[4731]: I1125 14:57:18.900483 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-7xvpg"] Nov 25 14:57:18 crc kubenswrapper[4731]: W1125 14:57:18.915213 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12559634_427c_4ce7_b9e8_d2dd37299642.slice/crio-bc5b9b4884d94b95abc283c5522dde3d6b3365f9dfcec795c124214bb0337a0e WatchSource:0}: Error finding container bc5b9b4884d94b95abc283c5522dde3d6b3365f9dfcec795c124214bb0337a0e: Status 404 returned error can't find the container with id bc5b9b4884d94b95abc283c5522dde3d6b3365f9dfcec795c124214bb0337a0e Nov 25 14:57:19 crc kubenswrapper[4731]: I1125 14:57:19.757608 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" event={"ID":"12559634-427c-4ce7-b9e8-d2dd37299642","Type":"ContainerStarted","Data":"1b2911c5ff28ae688cdb970c7f947e8f7ff16faa9cf3725d91fa8d3ca6cca185"} Nov 25 14:57:19 crc kubenswrapper[4731]: I1125 14:57:19.758512 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" event={"ID":"12559634-427c-4ce7-b9e8-d2dd37299642","Type":"ContainerStarted","Data":"4063b4a1e2e10537f9a1c62f7bd575c6c75afb023ec8c10f9753e6d19bf37d4d"} Nov 25 14:57:19 crc kubenswrapper[4731]: I1125 14:57:19.758531 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-7xvpg" event={"ID":"12559634-427c-4ce7-b9e8-d2dd37299642","Type":"ContainerStarted","Data":"bc5b9b4884d94b95abc283c5522dde3d6b3365f9dfcec795c124214bb0337a0e"} Nov 25 14:57:19 crc kubenswrapper[4731]: I1125 14:57:19.763373 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"00171e21-66c4-4fae-a582-add16a773996","Type":"ContainerStarted","Data":"cbdacf472263b0bf8557bebf036f7de3899272dbf968a6d7fda79ef75e4ff272"} Nov 25 14:57:19 crc kubenswrapper[4731]: I1125 14:57:19.763439 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-nzzxp" Nov 25 14:57:19 crc kubenswrapper[4731]: I1125 14:57:19.763456 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"00171e21-66c4-4fae-a582-add16a773996","Type":"ContainerStarted","Data":"98ff8018037b8e57b43d25dfed0bb47049ed3aee40629bb393964f641a1dd372"} Nov 25 14:57:19 crc kubenswrapper[4731]: I1125 14:57:19.763681 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:57:19 crc kubenswrapper[4731]: I1125 14:57:19.763718 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:57:19 crc kubenswrapper[4731]: I1125 14:57:19.777765 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-7xvpg" podStartSLOduration=173.777744182 podStartE2EDuration="2m53.777744182s" podCreationTimestamp="2025-11-25 14:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:57:19.775863741 +0000 UTC m=+198.522998181" watchObservedRunningTime="2025-11-25 14:57:19.777744182 +0000 UTC m=+198.524878612" Nov 25 14:57:19 crc kubenswrapper[4731]: I1125 14:57:19.795139 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=3.795113391 podStartE2EDuration="3.795113391s" podCreationTimestamp="2025-11-25 14:57:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:57:19.79361593 +0000 UTC m=+198.540750360" watchObservedRunningTime="2025-11-25 14:57:19.795113391 +0000 UTC m=+198.542247821" Nov 25 14:57:20 crc kubenswrapper[4731]: I1125 14:57:20.039596 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:57:20 crc kubenswrapper[4731]: I1125 14:57:20.040048 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:57:20 crc kubenswrapper[4731]: I1125 14:57:20.769948 4731 generic.go:334] "Generic (PLEG): container finished" podID="00171e21-66c4-4fae-a582-add16a773996" containerID="cbdacf472263b0bf8557bebf036f7de3899272dbf968a6d7fda79ef75e4ff272" exitCode=0 Nov 25 14:57:20 crc kubenswrapper[4731]: I1125 14:57:20.770054 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"00171e21-66c4-4fae-a582-add16a773996","Type":"ContainerDied","Data":"cbdacf472263b0bf8557bebf036f7de3899272dbf968a6d7fda79ef75e4ff272"} Nov 25 14:57:20 crc kubenswrapper[4731]: I1125 14:57:20.770428 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:57:20 crc kubenswrapper[4731]: I1125 14:57:20.770482 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.092931 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.092981 4731 patch_prober.go:28] interesting pod/downloads-7954f5f757-nzzxp container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.093363 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.093296 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-nzzxp" podUID="fc54c4d0-18b0-4249-9e18-d8a31c0135ae" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.165206 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.166116 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.178416 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.280563 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kube-api-access\") pod \"installer-9-crc\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.280639 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.280680 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-var-lock\") pod \"installer-9-crc\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.382144 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kube-api-access\") pod \"installer-9-crc\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.382486 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.382598 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kubelet-dir\") pod \"installer-9-crc\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.382615 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-var-lock\") pod \"installer-9-crc\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.382830 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-var-lock\") pod \"installer-9-crc\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.407135 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.535770 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:57:21 crc kubenswrapper[4731]: I1125 14:57:21.958251 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.054652 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.100056 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00171e21-66c4-4fae-a582-add16a773996-kubelet-dir\") pod \"00171e21-66c4-4fae-a582-add16a773996\" (UID: \"00171e21-66c4-4fae-a582-add16a773996\") " Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.100140 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00171e21-66c4-4fae-a582-add16a773996-kube-api-access\") pod \"00171e21-66c4-4fae-a582-add16a773996\" (UID: \"00171e21-66c4-4fae-a582-add16a773996\") " Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.100217 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00171e21-66c4-4fae-a582-add16a773996-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "00171e21-66c4-4fae-a582-add16a773996" (UID: "00171e21-66c4-4fae-a582-add16a773996"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.100521 4731 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/00171e21-66c4-4fae-a582-add16a773996-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.108241 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00171e21-66c4-4fae-a582-add16a773996-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "00171e21-66c4-4fae-a582-add16a773996" (UID: "00171e21-66c4-4fae-a582-add16a773996"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.201519 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/00171e21-66c4-4fae-a582-add16a773996-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.784105 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0d487e2a-09a0-47ff-90eb-0ed913b60ba0","Type":"ContainerStarted","Data":"52b0f5acc4d9e1159b87d5f404bd458e2de8c1096651a2c67ec02402dda7973d"} Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.784462 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0d487e2a-09a0-47ff-90eb-0ed913b60ba0","Type":"ContainerStarted","Data":"79920496e769220c79c9dd673eeeb55b99ac38de60a914004cac61b945cdaef6"} Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.786687 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"00171e21-66c4-4fae-a582-add16a773996","Type":"ContainerDied","Data":"98ff8018037b8e57b43d25dfed0bb47049ed3aee40629bb393964f641a1dd372"} Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.786726 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98ff8018037b8e57b43d25dfed0bb47049ed3aee40629bb393964f641a1dd372" Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.786738 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Nov 25 14:57:22 crc kubenswrapper[4731]: I1125 14:57:22.809862 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=1.8098071770000002 podStartE2EDuration="1.809807177s" podCreationTimestamp="2025-11-25 14:57:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:57:22.805156902 +0000 UTC m=+201.552291332" watchObservedRunningTime="2025-11-25 14:57:22.809807177 +0000 UTC m=+201.556941607" Nov 25 14:57:29 crc kubenswrapper[4731]: I1125 14:57:29.842062 4731 generic.go:334] "Generic (PLEG): container finished" podID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerID="f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774" exitCode=0 Nov 25 14:57:29 crc kubenswrapper[4731]: I1125 14:57:29.842186 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zjpwq" event={"ID":"4c945221-c3ce-443a-82c2-5eb45a00f65c","Type":"ContainerDied","Data":"f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774"} Nov 25 14:57:30 crc kubenswrapper[4731]: I1125 14:57:30.854668 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zjpwq" event={"ID":"4c945221-c3ce-443a-82c2-5eb45a00f65c","Type":"ContainerStarted","Data":"88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a"} Nov 25 14:57:30 crc kubenswrapper[4731]: I1125 14:57:30.858568 4731 generic.go:334] "Generic (PLEG): container finished" podID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerID="284e5a88a6e46d2446de49dfce7cce7f990f4f243ea43f5ec04330753cd687d4" exitCode=0 Nov 25 14:57:30 crc kubenswrapper[4731]: I1125 14:57:30.858639 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-cr25s" event={"ID":"525c51bb-0303-4b1b-aeb9-593fb77df6af","Type":"ContainerDied","Data":"284e5a88a6e46d2446de49dfce7cce7f990f4f243ea43f5ec04330753cd687d4"} Nov 25 14:57:30 crc kubenswrapper[4731]: I1125 14:57:30.902975 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zjpwq" podStartSLOduration=2.534126888 podStartE2EDuration="1m2.902946987s" podCreationTimestamp="2025-11-25 14:56:28 +0000 UTC" firstStartedPulling="2025-11-25 14:56:30.12185711 +0000 UTC m=+148.868991540" lastFinishedPulling="2025-11-25 14:57:30.490677209 +0000 UTC m=+209.237811639" observedRunningTime="2025-11-25 14:57:30.879206985 +0000 UTC m=+209.626341425" watchObservedRunningTime="2025-11-25 14:57:30.902946987 +0000 UTC m=+209.650081417" Nov 25 14:57:31 crc kubenswrapper[4731]: I1125 14:57:31.117765 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-nzzxp" Nov 25 14:57:31 crc kubenswrapper[4731]: I1125 14:57:31.869963 4731 generic.go:334] "Generic (PLEG): container finished" podID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerID="1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62" exitCode=0 Nov 25 14:57:31 crc kubenswrapper[4731]: I1125 14:57:31.870088 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj7cc" event={"ID":"9d0d6998-e333-4b4e-93c3-d2d346675f07","Type":"ContainerDied","Data":"1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62"} Nov 25 14:57:31 crc kubenswrapper[4731]: I1125 14:57:31.874255 4731 generic.go:334] "Generic (PLEG): container finished" podID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerID="5dc074c6284fa54a06148fdfeb779da8ac829fe9fb4e01273281afb4734b7921" exitCode=0 Nov 25 14:57:31 crc kubenswrapper[4731]: I1125 14:57:31.874330 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfskb" event={"ID":"a55f5a5e-8ee3-461e-81b3-78c56a873782","Type":"ContainerDied","Data":"5dc074c6284fa54a06148fdfeb779da8ac829fe9fb4e01273281afb4734b7921"} Nov 25 14:57:31 crc kubenswrapper[4731]: I1125 14:57:31.878059 4731 generic.go:334] "Generic (PLEG): container finished" podID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerID="e0961282890cbc7969db242e1ae7092e22fef60bad6bb40a08d647246def1011" exitCode=0 Nov 25 14:57:31 crc kubenswrapper[4731]: I1125 14:57:31.878176 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp8cm" event={"ID":"c1b339ba-eeea-496d-a2b4-e5b96f4f2380","Type":"ContainerDied","Data":"e0961282890cbc7969db242e1ae7092e22fef60bad6bb40a08d647246def1011"} Nov 25 14:57:32 crc kubenswrapper[4731]: I1125 14:57:32.885938 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp8cm" event={"ID":"c1b339ba-eeea-496d-a2b4-e5b96f4f2380","Type":"ContainerStarted","Data":"7fbaa541a2b1b9c407dac423bcbfa6613225f8b496e10148df0a1b7f52988927"} Nov 25 14:57:32 crc kubenswrapper[4731]: I1125 14:57:32.888042 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr25s" event={"ID":"525c51bb-0303-4b1b-aeb9-593fb77df6af","Type":"ContainerStarted","Data":"dbdec462140c3000db8c7f67713f3e8d57d6b4cc175e0ce7504a9cdc9b003551"} Nov 25 14:57:32 crc kubenswrapper[4731]: I1125 14:57:32.890196 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-qj7cc" event={"ID":"9d0d6998-e333-4b4e-93c3-d2d346675f07","Type":"ContainerStarted","Data":"21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25"} Nov 25 14:57:32 crc kubenswrapper[4731]: I1125 14:57:32.892310 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfskb" event={"ID":"a55f5a5e-8ee3-461e-81b3-78c56a873782","Type":"ContainerStarted","Data":"92423d3b592d4da2b96bd79c583809c9b465756240a644c4c6a811629b25a10e"} Nov 25 14:57:32 crc kubenswrapper[4731]: I1125 14:57:32.894065 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vj2x" event={"ID":"161c00b9-b4f1-4c6b-9ee9-f4b056415601","Type":"ContainerStarted","Data":"c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e"} Nov 25 14:57:32 crc kubenswrapper[4731]: I1125 14:57:32.916843 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vp8cm" podStartSLOduration=3.738923778 podStartE2EDuration="1m4.916806804s" podCreationTimestamp="2025-11-25 14:56:28 +0000 UTC" firstStartedPulling="2025-11-25 14:56:31.160720427 +0000 UTC m=+149.907854857" lastFinishedPulling="2025-11-25 14:57:32.338603453 +0000 UTC m=+211.085737883" observedRunningTime="2025-11-25 14:57:32.913753732 +0000 UTC m=+211.660888152" watchObservedRunningTime="2025-11-25 14:57:32.916806804 +0000 UTC m=+211.663941234" Nov 25 14:57:32 crc kubenswrapper[4731]: I1125 14:57:32.938404 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mfskb" podStartSLOduration=2.864026195 podStartE2EDuration="1m2.938383297s" podCreationTimestamp="2025-11-25 14:56:30 +0000 UTC" firstStartedPulling="2025-11-25 14:56:32.229003589 +0000 UTC m=+150.976138009" lastFinishedPulling="2025-11-25 14:57:32.303360681 +0000 UTC m=+211.050495111" observedRunningTime="2025-11-25 14:57:32.936337232 +0000 UTC m=+211.683471662" watchObservedRunningTime="2025-11-25 14:57:32.938383297 +0000 UTC m=+211.685517727" Nov 25 14:57:32 crc kubenswrapper[4731]: I1125 14:57:32.960641 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qj7cc" podStartSLOduration=2.785878154 podStartE2EDuration="1m2.960615318s" podCreationTimestamp="2025-11-25 14:56:30 +0000 UTC" firstStartedPulling="2025-11-25 14:56:32.180589711 +0000 UTC m=+150.927724141" lastFinishedPulling="2025-11-25 14:57:32.355326865 +0000 UTC m=+211.102461305" observedRunningTime="2025-11-25 14:57:32.955845809 +0000 UTC m=+211.702980229" watchObservedRunningTime="2025-11-25 14:57:32.960615318 +0000 UTC m=+211.707749748" Nov 25 14:57:32 crc kubenswrapper[4731]: I1125 14:57:32.976901 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cr25s" podStartSLOduration=4.482828208 podStartE2EDuration="1m4.976878787s" podCreationTimestamp="2025-11-25 14:56:28 +0000 UTC" firstStartedPulling="2025-11-25 14:56:31.148572381 +0000 UTC m=+149.895706821" lastFinishedPulling="2025-11-25 14:57:31.64262297 +0000 UTC m=+210.389757400" observedRunningTime="2025-11-25 14:57:32.974120663 +0000 UTC m=+211.721255093" watchObservedRunningTime="2025-11-25 14:57:32.976878787 +0000 UTC m=+211.724013207" Nov 25 14:57:33 crc kubenswrapper[4731]: I1125 14:57:33.901646 4731 generic.go:334] "Generic (PLEG): container finished" podID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" 
containerID="c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e" exitCode=0 Nov 25 14:57:33 crc kubenswrapper[4731]: I1125 14:57:33.901725 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vj2x" event={"ID":"161c00b9-b4f1-4c6b-9ee9-f4b056415601","Type":"ContainerDied","Data":"c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e"} Nov 25 14:57:38 crc kubenswrapper[4731]: I1125 14:57:38.791583 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:57:38 crc kubenswrapper[4731]: I1125 14:57:38.792379 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:57:39 crc kubenswrapper[4731]: I1125 14:57:39.099169 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:57:39 crc kubenswrapper[4731]: I1125 14:57:39.099234 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:57:39 crc kubenswrapper[4731]: I1125 14:57:39.262040 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:57:39 crc kubenswrapper[4731]: I1125 14:57:39.264467 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:57:39 crc kubenswrapper[4731]: I1125 14:57:39.338852 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:57:39 crc kubenswrapper[4731]: I1125 14:57:39.338940 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:57:39 crc kubenswrapper[4731]: I1125 14:57:39.384086 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:57:39 crc kubenswrapper[4731]: I1125 14:57:39.946471 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vj2x" event={"ID":"161c00b9-b4f1-4c6b-9ee9-f4b056415601","Type":"ContainerStarted","Data":"eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f"} Nov 25 14:57:39 crc kubenswrapper[4731]: I1125 14:57:39.950610 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-md9zz" event={"ID":"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca","Type":"ContainerStarted","Data":"c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485"} Nov 25 14:57:40 crc kubenswrapper[4731]: I1125 14:57:40.006257 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:57:40 crc kubenswrapper[4731]: I1125 14:57:40.008680 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:57:40 crc kubenswrapper[4731]: I1125 14:57:40.013142 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:57:40 crc kubenswrapper[4731]: I1125 14:57:40.750077 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:57:40 crc 
kubenswrapper[4731]: I1125 14:57:40.750181 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:57:40 crc kubenswrapper[4731]: I1125 14:57:40.831514 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:57:40 crc kubenswrapper[4731]: I1125 14:57:40.959552 4731 generic.go:334] "Generic (PLEG): container finished" podID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerID="c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485" exitCode=0 Nov 25 14:57:40 crc kubenswrapper[4731]: I1125 14:57:40.959727 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-md9zz" event={"ID":"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca","Type":"ContainerDied","Data":"c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485"} Nov 25 14:57:40 crc kubenswrapper[4731]: I1125 14:57:40.982685 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5vj2x" podStartSLOduration=5.452857035 podStartE2EDuration="1m9.982662686s" podCreationTimestamp="2025-11-25 14:56:31 +0000 UTC" firstStartedPulling="2025-11-25 14:56:34.383559879 +0000 UTC m=+153.130694309" lastFinishedPulling="2025-11-25 14:57:38.91336553 +0000 UTC m=+217.660499960" observedRunningTime="2025-11-25 14:57:40.980797106 +0000 UTC m=+219.727931556" watchObservedRunningTime="2025-11-25 14:57:40.982662686 +0000 UTC m=+219.729797116" Nov 25 14:57:41 crc kubenswrapper[4731]: I1125 14:57:41.002138 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:57:41 crc kubenswrapper[4731]: I1125 14:57:41.093073 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:57:41 crc kubenswrapper[4731]: I1125 14:57:41.093112 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:57:41 crc kubenswrapper[4731]: I1125 14:57:41.141738 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:57:41 crc kubenswrapper[4731]: I1125 14:57:41.803618 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cr25s"] Nov 25 14:57:41 crc kubenswrapper[4731]: I1125 14:57:41.968194 4731 generic.go:334] "Generic (PLEG): container finished" podID="b750a03d-5b79-4d0c-b569-674089d31556" containerID="d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b" exitCode=0 Nov 25 14:57:41 crc kubenswrapper[4731]: I1125 14:57:41.968954 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9p29" event={"ID":"b750a03d-5b79-4d0c-b569-674089d31556","Type":"ContainerDied","Data":"d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b"} Nov 25 14:57:42 crc kubenswrapper[4731]: I1125 14:57:42.037963 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:57:42 crc kubenswrapper[4731]: I1125 14:57:42.389157 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:57:42 crc kubenswrapper[4731]: I1125 14:57:42.389641 4731 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:57:42 crc kubenswrapper[4731]: I1125 14:57:42.408896 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vp8cm"] Nov 25 14:57:42 crc kubenswrapper[4731]: I1125 14:57:42.409275 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vp8cm" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerName="registry-server" containerID="cri-o://7fbaa541a2b1b9c407dac423bcbfa6613225f8b496e10148df0a1b7f52988927" gracePeriod=2 Nov 25 14:57:42 crc kubenswrapper[4731]: I1125 14:57:42.973735 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cr25s" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerName="registry-server" containerID="cri-o://dbdec462140c3000db8c7f67713f3e8d57d6b4cc175e0ce7504a9cdc9b003551" gracePeriod=2 Nov 25 14:57:43 crc kubenswrapper[4731]: I1125 14:57:43.441119 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5vj2x" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerName="registry-server" probeResult="failure" output=< Nov 25 14:57:43 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 14:57:43 crc kubenswrapper[4731]: > Nov 25 14:57:44 crc kubenswrapper[4731]: I1125 14:57:44.204673 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfskb"] Nov 25 14:57:44 crc kubenswrapper[4731]: I1125 14:57:44.205999 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mfskb" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerName="registry-server" containerID="cri-o://92423d3b592d4da2b96bd79c583809c9b465756240a644c4c6a811629b25a10e" gracePeriod=2 Nov 25 14:57:44 crc kubenswrapper[4731]: I1125 14:57:44.987469 4731 generic.go:334] "Generic (PLEG): container finished" podID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerID="dbdec462140c3000db8c7f67713f3e8d57d6b4cc175e0ce7504a9cdc9b003551" exitCode=0 Nov 25 14:57:44 crc kubenswrapper[4731]: I1125 14:57:44.987542 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr25s" event={"ID":"525c51bb-0303-4b1b-aeb9-593fb77df6af","Type":"ContainerDied","Data":"dbdec462140c3000db8c7f67713f3e8d57d6b4cc175e0ce7504a9cdc9b003551"} Nov 25 14:57:44 crc kubenswrapper[4731]: I1125 14:57:44.990722 4731 generic.go:334] "Generic (PLEG): container finished" podID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerID="7fbaa541a2b1b9c407dac423bcbfa6613225f8b496e10148df0a1b7f52988927" exitCode=0 Nov 25 14:57:44 crc kubenswrapper[4731]: I1125 14:57:44.990805 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp8cm" event={"ID":"c1b339ba-eeea-496d-a2b4-e5b96f4f2380","Type":"ContainerDied","Data":"7fbaa541a2b1b9c407dac423bcbfa6613225f8b496e10148df0a1b7f52988927"} Nov 25 14:57:45 crc kubenswrapper[4731]: I1125 14:57:45.591222 4731 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 14:57:45 crc kubenswrapper[4731]: I1125 14:57:45.761984 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-utilities\") pod \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " Nov 25 14:57:45 crc kubenswrapper[4731]: I1125 14:57:45.762035 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twc9g\" (UniqueName: \"kubernetes.io/projected/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-kube-api-access-twc9g\") pod \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " Nov 25 14:57:45 crc kubenswrapper[4731]: I1125 14:57:45.762107 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-catalog-content\") pod \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\" (UID: \"c1b339ba-eeea-496d-a2b4-e5b96f4f2380\") " Nov 25 14:57:45 crc kubenswrapper[4731]: I1125 14:57:45.763076 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-utilities" (OuterVolumeSpecName: "utilities") pod "c1b339ba-eeea-496d-a2b4-e5b96f4f2380" (UID: "c1b339ba-eeea-496d-a2b4-e5b96f4f2380"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:45 crc kubenswrapper[4731]: I1125 14:57:45.770780 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-kube-api-access-twc9g" (OuterVolumeSpecName: "kube-api-access-twc9g") pod "c1b339ba-eeea-496d-a2b4-e5b96f4f2380" (UID: "c1b339ba-eeea-496d-a2b4-e5b96f4f2380"). InnerVolumeSpecName "kube-api-access-twc9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:57:45 crc kubenswrapper[4731]: I1125 14:57:45.863776 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:45 crc kubenswrapper[4731]: I1125 14:57:45.863830 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twc9g\" (UniqueName: \"kubernetes.io/projected/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-kube-api-access-twc9g\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:46 crc kubenswrapper[4731]: I1125 14:57:46.000099 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vp8cm" event={"ID":"c1b339ba-eeea-496d-a2b4-e5b96f4f2380","Type":"ContainerDied","Data":"ce7ce700f7870b59c22a50486395c84253f36b1a696ad11612a25b6e4d575357"} Nov 25 14:57:46 crc kubenswrapper[4731]: I1125 14:57:46.000637 4731 scope.go:117] "RemoveContainer" containerID="7fbaa541a2b1b9c407dac423bcbfa6613225f8b496e10148df0a1b7f52988927" Nov 25 14:57:46 crc kubenswrapper[4731]: I1125 14:57:46.000186 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vp8cm" Nov 25 14:57:46 crc kubenswrapper[4731]: I1125 14:57:46.579055 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1b339ba-eeea-496d-a2b4-e5b96f4f2380" (UID: "c1b339ba-eeea-496d-a2b4-e5b96f4f2380"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:46 crc kubenswrapper[4731]: I1125 14:57:46.636700 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vp8cm"] Nov 25 14:57:46 crc kubenswrapper[4731]: I1125 14:57:46.639993 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vp8cm"] Nov 25 14:57:46 crc kubenswrapper[4731]: I1125 14:57:46.677661 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1b339ba-eeea-496d-a2b4-e5b96f4f2380-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:46 crc kubenswrapper[4731]: I1125 14:57:46.823320 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" path="/var/lib/kubelet/pods/c1b339ba-eeea-496d-a2b4-e5b96f4f2380/volumes" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.012313 4731 generic.go:334] "Generic (PLEG): container finished" podID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerID="92423d3b592d4da2b96bd79c583809c9b465756240a644c4c6a811629b25a10e" exitCode=0 Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.012373 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfskb" event={"ID":"a55f5a5e-8ee3-461e-81b3-78c56a873782","Type":"ContainerDied","Data":"92423d3b592d4da2b96bd79c583809c9b465756240a644c4c6a811629b25a10e"} Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.523316 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.674587 4731 scope.go:117] "RemoveContainer" containerID="e0961282890cbc7969db242e1ae7092e22fef60bad6bb40a08d647246def1011" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.690737 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-utilities\") pod \"525c51bb-0303-4b1b-aeb9-593fb77df6af\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.690956 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-catalog-content\") pod \"525c51bb-0303-4b1b-aeb9-593fb77df6af\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.691074 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9lvs\" (UniqueName: \"kubernetes.io/projected/525c51bb-0303-4b1b-aeb9-593fb77df6af-kube-api-access-m9lvs\") pod \"525c51bb-0303-4b1b-aeb9-593fb77df6af\" (UID: \"525c51bb-0303-4b1b-aeb9-593fb77df6af\") " Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.691833 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-utilities" (OuterVolumeSpecName: "utilities") pod "525c51bb-0303-4b1b-aeb9-593fb77df6af" (UID: "525c51bb-0303-4b1b-aeb9-593fb77df6af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.698436 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525c51bb-0303-4b1b-aeb9-593fb77df6af-kube-api-access-m9lvs" (OuterVolumeSpecName: "kube-api-access-m9lvs") pod "525c51bb-0303-4b1b-aeb9-593fb77df6af" (UID: "525c51bb-0303-4b1b-aeb9-593fb77df6af"). InnerVolumeSpecName "kube-api-access-m9lvs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.705393 4731 scope.go:117] "RemoveContainer" containerID="c3bfd7f757e88bb9eaa0346eea5b6c57c346bfd3ee4bb473340a4e9a03a1f704" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.747246 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "525c51bb-0303-4b1b-aeb9-593fb77df6af" (UID: "525c51bb-0303-4b1b-aeb9-593fb77df6af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.792920 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.792961 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9lvs\" (UniqueName: \"kubernetes.io/projected/525c51bb-0303-4b1b-aeb9-593fb77df6af-kube-api-access-m9lvs\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.792972 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/525c51bb-0303-4b1b-aeb9-593fb77df6af-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.817052 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.995877 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-catalog-content\") pod \"a55f5a5e-8ee3-461e-81b3-78c56a873782\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.995999 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrffz\" (UniqueName: \"kubernetes.io/projected/a55f5a5e-8ee3-461e-81b3-78c56a873782-kube-api-access-jrffz\") pod \"a55f5a5e-8ee3-461e-81b3-78c56a873782\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.996118 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-utilities\") pod \"a55f5a5e-8ee3-461e-81b3-78c56a873782\" (UID: \"a55f5a5e-8ee3-461e-81b3-78c56a873782\") " Nov 25 14:57:47 crc kubenswrapper[4731]: I1125 14:57:47.997624 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-utilities" (OuterVolumeSpecName: "utilities") pod "a55f5a5e-8ee3-461e-81b3-78c56a873782" (UID: "a55f5a5e-8ee3-461e-81b3-78c56a873782"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:47.999899 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.000125 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a55f5a5e-8ee3-461e-81b3-78c56a873782-kube-api-access-jrffz" (OuterVolumeSpecName: "kube-api-access-jrffz") pod "a55f5a5e-8ee3-461e-81b3-78c56a873782" (UID: "a55f5a5e-8ee3-461e-81b3-78c56a873782"). InnerVolumeSpecName "kube-api-access-jrffz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.014486 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a55f5a5e-8ee3-461e-81b3-78c56a873782" (UID: "a55f5a5e-8ee3-461e-81b3-78c56a873782"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.023971 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cr25s" event={"ID":"525c51bb-0303-4b1b-aeb9-593fb77df6af","Type":"ContainerDied","Data":"e0215c87bac03b97c83658f93931bdbae4a234772cf28b22cfa5fc2b431b909b"} Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.024015 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cr25s" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.024059 4731 scope.go:117] "RemoveContainer" containerID="dbdec462140c3000db8c7f67713f3e8d57d6b4cc175e0ce7504a9cdc9b003551" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.026393 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfskb" event={"ID":"a55f5a5e-8ee3-461e-81b3-78c56a873782","Type":"ContainerDied","Data":"adfae1c3f83b4dab762d17221943a8906c7e76f6a209ad12b42f4e4d834141cc"} Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.026523 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfskb" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.045065 4731 scope.go:117] "RemoveContainer" containerID="284e5a88a6e46d2446de49dfce7cce7f990f4f243ea43f5ec04330753cd687d4" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.059457 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cr25s"] Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.064575 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cr25s"] Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.068532 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfskb"] Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.071167 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfskb"] Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.076316 4731 scope.go:117] "RemoveContainer" containerID="853ec981ed68284335ed6ec37fa46086210dccc7ef8f7c0ed357b577e496b0b4" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.091878 4731 scope.go:117] "RemoveContainer" containerID="92423d3b592d4da2b96bd79c583809c9b465756240a644c4c6a811629b25a10e" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.101505 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrffz\" (UniqueName: \"kubernetes.io/projected/a55f5a5e-8ee3-461e-81b3-78c56a873782-kube-api-access-jrffz\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.101546 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a55f5a5e-8ee3-461e-81b3-78c56a873782-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 
14:57:48.106615 4731 scope.go:117] "RemoveContainer" containerID="5dc074c6284fa54a06148fdfeb779da8ac829fe9fb4e01273281afb4734b7921" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.125288 4731 scope.go:117] "RemoveContainer" containerID="a806bc7c78310bf6100510d6603cec924a90dd13e809e96d484f6436ef2c3db2" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.822683 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" path="/var/lib/kubelet/pods/525c51bb-0303-4b1b-aeb9-593fb77df6af/volumes" Nov 25 14:57:48 crc kubenswrapper[4731]: I1125 14:57:48.824619 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" path="/var/lib/kubelet/pods/a55f5a5e-8ee3-461e-81b3-78c56a873782/volumes" Nov 25 14:57:49 crc kubenswrapper[4731]: I1125 14:57:49.037369 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-md9zz" event={"ID":"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca","Type":"ContainerStarted","Data":"ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1"} Nov 25 14:57:49 crc kubenswrapper[4731]: I1125 14:57:49.064135 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-md9zz" podStartSLOduration=3.762315553 podStartE2EDuration="1m18.06409923s" podCreationTimestamp="2025-11-25 14:56:31 +0000 UTC" firstStartedPulling="2025-11-25 14:56:33.37306339 +0000 UTC m=+152.120197830" lastFinishedPulling="2025-11-25 14:57:47.674847077 +0000 UTC m=+226.421981507" observedRunningTime="2025-11-25 14:57:49.060208255 +0000 UTC m=+227.807342695" watchObservedRunningTime="2025-11-25 14:57:49.06409923 +0000 UTC m=+227.811233670" Nov 25 14:57:50 crc kubenswrapper[4731]: I1125 14:57:50.040740 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 14:57:50 crc kubenswrapper[4731]: I1125 14:57:50.040855 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 14:57:50 crc kubenswrapper[4731]: I1125 14:57:50.040936 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 14:57:50 crc kubenswrapper[4731]: I1125 14:57:50.041622 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 14:57:50 crc kubenswrapper[4731]: I1125 14:57:50.041684 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b" gracePeriod=600
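
The machine-config-daemon entries above show the full liveness path: an HTTP GET against 127.0.0.1:8798/health is refused, the probe flips to "unhealthy", and kubelet kills the container with a 600s grace period so it can be restarted. A sketch of one such probe attempt in Go; the endpoint and grace period are taken from the log, while the single-failure trigger is a simplification (kubelet applies the probe's configured failureThreshold before acting):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// One liveness attempt against the endpoint from the log. On failure the
// container is marked for a graceful restart, mirroring the
// "failed liveness probe, will be restarted" message above.
func livenessOnce(url string, grace time.Duration) {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get(url)
	if err != nil {
		fmt.Printf("Probe failed: %v\n", err)
		fmt.Printf("Container failed liveness probe, will be restarted (gracePeriod=%v)\n", grace)
		return
	}
	resp.Body.Close()
	fmt.Println("probe ok:", resp.Status)
}

func main() {
	livenessOnce("http://127.0.0.1:8798/health", 600*time.Second)
}
```

The PLEG entries that follow (ContainerDied for 0f697b93..., then ContainerStarted for 084e1849...) are the restart completing.
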
Nov 25 14:57:51 crc kubenswrapper[4731]: I1125 14:57:51.044590 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fp5cw"] Nov 25 14:57:51 crc kubenswrapper[4731]: I1125 14:57:51.069185 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b" exitCode=0 Nov 25 14:57:51 crc kubenswrapper[4731]: I1125 14:57:51.069254 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b"} Nov 25 14:57:51 crc kubenswrapper[4731]: I1125 14:57:51.898927 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:57:51 crc kubenswrapper[4731]: I1125 14:57:51.899434 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:57:52 crc kubenswrapper[4731]: I1125 14:57:52.086727 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"084e1849e03ff9c03a065300886ce95d4f2387a6f08620af335a641d5c33dcf2"} Nov 25 14:57:52 crc kubenswrapper[4731]: I1125 14:57:52.098175 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9p29" event={"ID":"b750a03d-5b79-4d0c-b569-674089d31556","Type":"ContainerStarted","Data":"55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a"} Nov 25 14:57:52 crc kubenswrapper[4731]: I1125 14:57:52.135138 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q9p29" podStartSLOduration=4.779995623 podStartE2EDuration="1m24.135109158s" podCreationTimestamp="2025-11-25 14:56:28 +0000 UTC" firstStartedPulling="2025-11-25 14:56:31.152090902 +0000 UTC m=+149.899225332" lastFinishedPulling="2025-11-25 14:57:50.507204437 +0000 UTC m=+229.254338867" observedRunningTime="2025-11-25 14:57:52.130337569 +0000 UTC m=+230.877471999" watchObservedRunningTime="2025-11-25 14:57:52.135109158 +0000 UTC m=+230.882243588" Nov 25 14:57:52 crc kubenswrapper[4731]: I1125 14:57:52.429466 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:57:52 crc kubenswrapper[4731]: I1125 14:57:52.470793 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:57:52 crc kubenswrapper[4731]: I1125 14:57:52.953739 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-md9zz" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerName="registry-server" probeResult="failure" output=< Nov 25 14:57:52 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 14:57:52 crc kubenswrapper[4731]: > Nov 25 14:57:53 crc kubenswrapper[4731]: I1125 14:57:53.002637 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5vj2x"] Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.126447 4731 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-5vj2x" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerName="registry-server" containerID="cri-o://eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f" gracePeriod=2 Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.505604 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.509569 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f87rs\" (UniqueName: \"kubernetes.io/projected/161c00b9-b4f1-4c6b-9ee9-f4b056415601-kube-api-access-f87rs\") pod \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.509652 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-utilities\") pod \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.509722 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-catalog-content\") pod \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\" (UID: \"161c00b9-b4f1-4c6b-9ee9-f4b056415601\") " Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.511282 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-utilities" (OuterVolumeSpecName: "utilities") pod "161c00b9-b4f1-4c6b-9ee9-f4b056415601" (UID: "161c00b9-b4f1-4c6b-9ee9-f4b056415601"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.522000 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161c00b9-b4f1-4c6b-9ee9-f4b056415601-kube-api-access-f87rs" (OuterVolumeSpecName: "kube-api-access-f87rs") pod "161c00b9-b4f1-4c6b-9ee9-f4b056415601" (UID: "161c00b9-b4f1-4c6b-9ee9-f4b056415601"). InnerVolumeSpecName "kube-api-access-f87rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.605699 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "161c00b9-b4f1-4c6b-9ee9-f4b056415601" (UID: "161c00b9-b4f1-4c6b-9ee9-f4b056415601"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.611685 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.611716 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/161c00b9-b4f1-4c6b-9ee9-f4b056415601-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:54 crc kubenswrapper[4731]: I1125 14:57:54.611737 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f87rs\" (UniqueName: \"kubernetes.io/projected/161c00b9-b4f1-4c6b-9ee9-f4b056415601-kube-api-access-f87rs\") on node \"crc\" DevicePath \"\"" Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.133726 4731 generic.go:334] "Generic (PLEG): container finished" podID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerID="eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f" exitCode=0 Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.133774 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vj2x" event={"ID":"161c00b9-b4f1-4c6b-9ee9-f4b056415601","Type":"ContainerDied","Data":"eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f"} Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.133808 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5vj2x" event={"ID":"161c00b9-b4f1-4c6b-9ee9-f4b056415601","Type":"ContainerDied","Data":"c3816ea4370e0e376870cf75e6f32f4344954e5628c3faadda39a14522777b16"} Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.133850 4731 scope.go:117] "RemoveContainer" containerID="eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f" Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.133900 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5vj2x" Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.159132 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5vj2x"] Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.160220 4731 scope.go:117] "RemoveContainer" containerID="c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e" Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.163136 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5vj2x"] Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.182121 4731 scope.go:117] "RemoveContainer" containerID="f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc" Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.196683 4731 scope.go:117] "RemoveContainer" containerID="eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f" Nov 25 14:57:55 crc kubenswrapper[4731]: E1125 14:57:55.197229 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f\": container with ID starting with eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f not found: ID does not exist" containerID="eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f" Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.197281 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f"} err="failed to get container status \"eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f\": rpc error: code = NotFound desc = could not find container \"eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f\": container with ID starting with eebe78c9e2530ea35c102c4825695b6a841f907e36877f533cf0b2d7d387157f not found: ID does not exist" Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.197320 4731 scope.go:117] "RemoveContainer" containerID="c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e" Nov 25 14:57:55 crc kubenswrapper[4731]: E1125 14:57:55.197714 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e\": container with ID starting with c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e not found: ID does not exist" containerID="c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e" Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.197756 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e"} err="failed to get container status \"c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e\": rpc error: code = NotFound desc = could not find container \"c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e\": container with ID starting with c9cb6f57e68d763e375b8bf9673939d6e419d1ab1ab2c150a4c0d7ea5a511a0e not found: ID does not exist" Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.197782 4731 scope.go:117] "RemoveContainer" containerID="f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc" Nov 25 14:57:55 crc kubenswrapper[4731]: E1125 14:57:55.198385 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc\": container with ID starting with f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc not found: ID does not exist" containerID="f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc"
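
The paired E/I entries above show kubelet asking CRI-O for the status of containers that were already deleted: the runtime answers with gRPC code NotFound, kubelet logs "DeleteContainer returned error", and the sync loop carries on, since the desired state (container gone) already holds. A sketch of that NotFound-tolerant removal pattern in Go; it assumes the google.golang.org/grpc module, and removeFromRuntime is a hypothetical stand-in for the real CRI RemoveContainer call:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// Treat a NotFound answer from the runtime as success: the container we
// wanted removed no longer exists, which is exactly the desired outcome.
func removeContainer(id string, removeFromRuntime func(string) error) error {
	if err := removeFromRuntime(id); err != nil {
		if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
			fmt.Printf("container %s already gone, nothing to do\n", id)
			return nil
		}
		return fmt.Errorf("failed to remove container %s: %w", id, err)
	}
	return nil
}

func main() {
	// Simulated runtime that has already deleted the container, mirroring
	// the "could not find container ...: ID does not exist" errors above.
	gone := func(id string) error {
		return status.Error(codes.NotFound, "could not find container "+id)
	}
	if err := removeContainer("eebe78c9e253", gone); err != nil {
		panic(err)
	}
}
```

This is why the errors above are followed by normal cleanup ("Cleaned up orphaned pod volumes dir") rather than a retry storm: the deletion is effectively idempotent.
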
err="rpc error: code = NotFound desc = could not find container \"f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc\": container with ID starting with f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc not found: ID does not exist" containerID="f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc" Nov 25 14:57:55 crc kubenswrapper[4731]: I1125 14:57:55.198433 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc"} err="failed to get container status \"f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc\": rpc error: code = NotFound desc = could not find container \"f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc\": container with ID starting with f523a6e8cfa7e88debc25592a1b197740456bb83abdda1d451eb17af9381b8fc not found: ID does not exist" Nov 25 14:57:56 crc kubenswrapper[4731]: I1125 14:57:56.820765 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" path="/var/lib/kubelet/pods/161c00b9-b4f1-4c6b-9ee9-f4b056415601/volumes" Nov 25 14:57:58 crc kubenswrapper[4731]: I1125 14:57:58.912119 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:57:58 crc kubenswrapper[4731]: I1125 14:57:58.912521 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:57:58 crc kubenswrapper[4731]: I1125 14:57:58.964249 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:57:59 crc kubenswrapper[4731]: I1125 14:57:59.205241 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.360787 4731 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361171 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerName="extract-utilities" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361190 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerName="extract-utilities" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361211 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerName="extract-utilities" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361223 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerName="extract-utilities" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361234 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361245 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361259 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerName="extract-content" Nov 25 14:58:00 crc 
kubenswrapper[4731]: I1125 14:58:00.361268 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerName="extract-content" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361282 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerName="extract-content" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361290 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerName="extract-content" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361302 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerName="extract-content" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361310 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerName="extract-content" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361322 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerName="extract-content" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361329 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerName="extract-content" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361341 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361350 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361370 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361380 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361391 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361399 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361411 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerName="extract-utilities" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361420 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerName="extract-utilities" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361428 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00171e21-66c4-4fae-a582-add16a773996" containerName="pruner" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361437 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="00171e21-66c4-4fae-a582-add16a773996" containerName="pruner" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.361447 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerName="extract-utilities" Nov 25 
14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361455 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerName="extract-utilities" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361661 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1b339ba-eeea-496d-a2b4-e5b96f4f2380" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361711 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a55f5a5e-8ee3-461e-81b3-78c56a873782" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361724 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="161c00b9-b4f1-4c6b-9ee9-f4b056415601" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361738 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="525c51bb-0303-4b1b-aeb9-593fb77df6af" containerName="registry-server" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.361751 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="00171e21-66c4-4fae-a582-add16a773996" containerName="pruner" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.362364 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.362991 4731 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.363420 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75" gracePeriod=15 Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.363568 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874" gracePeriod=15 Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.363468 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5" gracePeriod=15 Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.363492 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb" gracePeriod=15 Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.363430 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e" gracePeriod=15 Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364317 4731 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
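
The entries above capture a static-pod rollover: the kube-apiserver-crc manifest is removed and re-added from the file source, its old containers are killed with gracePeriod=15, and before admitting the replacements kubelet purges per-container resource-manager bookkeeping for pods that no longer exist. Each E-level "RemoveStaleState: removing container" line is immediately followed by a matching "Deleted CPUSet assignment", so the pass completes. A sketch of that set-difference cleanup; the map-based state is illustrative, not kubelet's checkpointed cpu_manager/memory_manager format:

```go
package main

import "fmt"

// Drop CPU-manager bookkeeping for any (podUID, container) pair whose pod
// is no longer in the active set, mirroring the RemoveStaleState /
// "Deleted CPUSet assignment" pairs above.
func removeStaleState(assignments map[string]map[string]string, active map[string]bool) {
	for podUID, containers := range assignments {
		if active[podUID] {
			continue
		}
		for name := range containers {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n", podUID, name)
			delete(containers, name)
		}
		delete(assignments, podUID)
	}
}

func main() {
	// podUID -> containerName -> assigned CPU set (values illustrative).
	state := map[string]map[string]string{
		"161c00b9-b4f1-4c6b-9ee9-f4b056415601": {"extract-utilities": "0-3", "registry-server": "0-3"},
		"f4b27818a5e8e43d0dc095d08835c792":     {"kube-apiserver": "0-3"},
	}
	removeStaleState(state, map[string]bool{"f4b27818a5e8e43d0dc095d08835c792": true})
	fmt.Println(len(state)) // 1: only the active pod's state remains
}
```
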
ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.364459 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364473 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.364481 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364487 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.364495 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364501 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.364512 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364518 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.364530 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364536 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.364544 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364549 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.364562 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364569 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364670 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364679 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364687 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-syncer" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364694 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364701 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364737 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364746 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.364868 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.364875 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.495552 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.496006 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.496035 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.496062 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.496088 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.496119 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.496135 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.496155 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.597274 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.597734 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.597874 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.597690 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598106 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598252 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598409 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598528 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598642 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598365 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598139 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598500 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598166 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598614 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598108 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: I1125 14:58:00.598746 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:00 crc kubenswrapper[4731]: E1125 14:58:00.902320 4731 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" volumeName="registry-storage" Nov 25 14:58:01 crc kubenswrapper[4731]: E1125 14:58:01.099891 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:58:01Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:58:01Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:58:01Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:58:01Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:01 crc kubenswrapper[4731]: E1125 14:58:01.100616 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:01 crc kubenswrapper[4731]: E1125 14:58:01.101447 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:01 crc kubenswrapper[4731]: E1125 14:58:01.101904 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:01 crc kubenswrapper[4731]: E1125 14:58:01.102411 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:01 crc kubenswrapper[4731]: E1125 14:58:01.102439 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.176923 4731 generic.go:334] "Generic (PLEG): container finished" podID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" containerID="52b0f5acc4d9e1159b87d5f404bd458e2de8c1096651a2c67ec02402dda7973d" exitCode=0 Nov 25 14:58:01 crc 
kubenswrapper[4731]: I1125 14:58:01.176984 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0d487e2a-09a0-47ff-90eb-0ed913b60ba0","Type":"ContainerDied","Data":"52b0f5acc4d9e1159b87d5f404bd458e2de8c1096651a2c67ec02402dda7973d"} Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.178850 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.194719 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.198901 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.200422 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874" exitCode=0 Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.200641 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e" exitCode=0 Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.200728 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb" exitCode=0 Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.200878 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5" exitCode=2 Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.200614 4731 scope.go:117] "RemoveContainer" containerID="6c0d3d3025d066bbcf9bf43a0e635f070525ef499cfa6126fd74bfe5fbf60efe" Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.959147 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.960405 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:01 crc kubenswrapper[4731]: I1125 14:58:01.961233 4731 status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.018626 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 
14:58:02.019250 4731 status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.019590 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.212130 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.534131 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.535326 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.536073 4731 status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.733449 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kube-api-access\") pod \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.733704 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-var-lock\") pod \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.733757 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kubelet-dir\") pod \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\" (UID: \"0d487e2a-09a0-47ff-90eb-0ed913b60ba0\") " Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.733928 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-var-lock" (OuterVolumeSpecName: "var-lock") pod "0d487e2a-09a0-47ff-90eb-0ed913b60ba0" (UID: "0d487e2a-09a0-47ff-90eb-0ed913b60ba0"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.733919 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0d487e2a-09a0-47ff-90eb-0ed913b60ba0" (UID: "0d487e2a-09a0-47ff-90eb-0ed913b60ba0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.734255 4731 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.734280 4731 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kubelet-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.741598 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0d487e2a-09a0-47ff-90eb-0ed913b60ba0" (UID: "0d487e2a-09a0-47ff-90eb-0ed913b60ba0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.817081 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.817724 4731 status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.835345 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d487e2a-09a0-47ff-90eb-0ed913b60ba0-kube-api-access\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.840853 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.842340 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.843058 4731 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.843626 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:02 crc kubenswrapper[4731]: I1125 14:58:02.844236 4731 status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.037851 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.037982 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.038006 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.038069 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.038235 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.038387 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.038712 4731 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.038733 4731 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.038741 4731 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.222439 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.223239 4731 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75" exitCode=0 Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.223339 4731 scope.go:117] "RemoveContainer" containerID="8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.223348 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.226830 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"0d487e2a-09a0-47ff-90eb-0ed913b60ba0","Type":"ContainerDied","Data":"79920496e769220c79c9dd673eeeb55b99ac38de60a914004cac61b945cdaef6"} Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.226868 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79920496e769220c79c9dd673eeeb55b99ac38de60a914004cac61b945cdaef6" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.226875 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.230937 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.231196 4731 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.231551 4731 status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.237321 4731 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.237677 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.237923 4731 status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.239887 4731 scope.go:117] "RemoveContainer" containerID="bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.250561 4731 scope.go:117] "RemoveContainer" containerID="cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.262621 4731 scope.go:117] "RemoveContainer" containerID="47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.275967 4731 scope.go:117] "RemoveContainer" containerID="ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.303239 4731 scope.go:117] "RemoveContainer" containerID="b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.325755 4731 scope.go:117] "RemoveContainer" containerID="8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874" Nov 25 14:58:03 crc kubenswrapper[4731]: E1125 14:58:03.326478 4731 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\": container with ID starting with 8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874 not found: ID does not exist" containerID="8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.326524 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874"} err="failed to get container status \"8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\": rpc error: code = NotFound desc = could not find container \"8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874\": container with ID starting with 8d0de61e24f57a0458576bb4838b1139f9445fa4f93766ba2e5a89ddf7fdc874 not found: ID does not exist" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.326553 4731 scope.go:117] "RemoveContainer" containerID="bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e" Nov 25 14:58:03 crc kubenswrapper[4731]: E1125 14:58:03.326974 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\": container with ID starting with bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e not found: ID does not exist" containerID="bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.327025 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e"} err="failed to get container status \"bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\": rpc error: code = NotFound desc = could not find container \"bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e\": container with ID starting with bfc70b53b8ce88ce457753ff244dc9d516ce5882d86494a4084594a9f3f0560e not found: ID does not exist" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.327063 4731 scope.go:117] "RemoveContainer" containerID="cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb" Nov 25 14:58:03 crc kubenswrapper[4731]: E1125 14:58:03.328282 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\": container with ID starting with cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb not found: ID does not exist" containerID="cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.328319 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb"} err="failed to get container status \"cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\": rpc error: code = NotFound desc = could not find container \"cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb\": container with ID starting with cae1105456db4476b3a79b386e08a9db4cfc735a070913de8b5a60b1f9d4a3bb not found: ID does not exist" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.328337 4731 scope.go:117] "RemoveContainer" 
containerID="47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5" Nov 25 14:58:03 crc kubenswrapper[4731]: E1125 14:58:03.328775 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\": container with ID starting with 47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5 not found: ID does not exist" containerID="47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.328807 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5"} err="failed to get container status \"47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\": rpc error: code = NotFound desc = could not find container \"47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5\": container with ID starting with 47501cf321e4451908929131bb94cc9301e1f94b4a15a1e21de5237b9cfca2d5 not found: ID does not exist" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.328834 4731 scope.go:117] "RemoveContainer" containerID="ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75" Nov 25 14:58:03 crc kubenswrapper[4731]: E1125 14:58:03.329191 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\": container with ID starting with ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75 not found: ID does not exist" containerID="ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.329775 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75"} err="failed to get container status \"ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\": rpc error: code = NotFound desc = could not find container \"ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75\": container with ID starting with ef405db9bb32738a51217e89b1d3e6c6c6d42c58e820e49d243c36e4aa181d75 not found: ID does not exist" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.329855 4731 scope.go:117] "RemoveContainer" containerID="b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7" Nov 25 14:58:03 crc kubenswrapper[4731]: E1125 14:58:03.331541 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\": container with ID starting with b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7 not found: ID does not exist" containerID="b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7" Nov 25 14:58:03 crc kubenswrapper[4731]: I1125 14:58:03.331571 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7"} err="failed to get container status \"b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\": rpc error: code = NotFound desc = could not find container \"b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7\": container with ID starting with 
b8e245bf34db84331c124e3b999a550b8f819f926a9f1b35c70bde4caac289f7 not found: ID does not exist" Nov 25 14:58:04 crc kubenswrapper[4731]: I1125 14:58:04.819686 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Nov 25 14:58:05 crc kubenswrapper[4731]: E1125 14:58:05.408485 4731 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:05 crc kubenswrapper[4731]: I1125 14:58:05.409884 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:05 crc kubenswrapper[4731]: E1125 14:58:05.465686 4731 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b47db1d8df034 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 14:58:05.464932404 +0000 UTC m=+244.212066834,LastTimestamp:2025-11-25 14:58:05.464932404 +0000 UTC m=+244.212066834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 14:58:05 crc kubenswrapper[4731]: I1125 14:58:05.696714 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"12f2a424a8f296753d7be06261f54fbfddf79d620e91d2babc0e97d7acf0291e"} Nov 25 14:58:06 crc kubenswrapper[4731]: I1125 14:58:06.704266 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc"} Nov 25 14:58:06 crc kubenswrapper[4731]: I1125 14:58:06.704976 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:06 crc kubenswrapper[4731]: E1125 14:58:06.705078 4731 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:06 crc kubenswrapper[4731]: I1125 14:58:06.705316 4731 
status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:07 crc kubenswrapper[4731]: E1125 14:58:07.709790 4731 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:08 crc kubenswrapper[4731]: E1125 14:58:08.474067 4731 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:08 crc kubenswrapper[4731]: E1125 14:58:08.474374 4731 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:08 crc kubenswrapper[4731]: E1125 14:58:08.474790 4731 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:08 crc kubenswrapper[4731]: E1125 14:58:08.475873 4731 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:08 crc kubenswrapper[4731]: E1125 14:58:08.476105 4731 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:08 crc kubenswrapper[4731]: I1125 14:58:08.476139 4731 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Nov 25 14:58:08 crc kubenswrapper[4731]: E1125 14:58:08.476369 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="200ms" Nov 25 14:58:08 crc kubenswrapper[4731]: E1125 14:58:08.677440 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="400ms" Nov 25 14:58:09 crc kubenswrapper[4731]: E1125 14:58:09.078472 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="800ms" Nov 25 14:58:09 crc kubenswrapper[4731]: E1125 14:58:09.880291 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="1.6s" Nov 25 14:58:11 crc kubenswrapper[4731]: E1125 14:58:11.303746 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:58:11Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:58:11Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:58:11Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-11-25T14:58:11Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:11 crc kubenswrapper[4731]: E1125 14:58:11.304316 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:11 crc kubenswrapper[4731]: E1125 14:58:11.304723 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:11 crc kubenswrapper[4731]: E1125 14:58:11.305087 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:11 crc kubenswrapper[4731]: E1125 14:58:11.305405 4731 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:11 crc kubenswrapper[4731]: E1125 14:58:11.305424 4731 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Nov 25 14:58:11 crc kubenswrapper[4731]: E1125 14:58:11.428965 4731 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.80:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187b47db1d8df034 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-11-25 14:58:05.464932404 +0000 UTC m=+244.212066834,LastTimestamp:2025-11-25 14:58:05.464932404 +0000 UTC m=+244.212066834,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Nov 25 14:58:11 crc kubenswrapper[4731]: E1125 14:58:11.482008 4731 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.80:6443: connect: connection refused" interval="3.2s" Nov 25 14:58:11 crc kubenswrapper[4731]: I1125 14:58:11.811421 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:11 crc kubenswrapper[4731]: I1125 14:58:11.812363 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:11 crc kubenswrapper[4731]: I1125 14:58:11.812770 4731 status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:11 crc kubenswrapper[4731]: I1125 14:58:11.829760 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e8368a5d-5e6f-4425-8187-46cd17cca8fd" Nov 25 14:58:11 crc kubenswrapper[4731]: I1125 14:58:11.829802 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e8368a5d-5e6f-4425-8187-46cd17cca8fd" Nov 25 14:58:11 crc kubenswrapper[4731]: E1125 14:58:11.830179 4731 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:11 crc kubenswrapper[4731]: I1125 14:58:11.830703 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:12 crc kubenswrapper[4731]: I1125 14:58:12.739617 4731 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="b3a362d59fee811e2d917d30d719e54d86ab96f4d9e5e203d2af1ecf249b4900" exitCode=0 Nov 25 14:58:12 crc kubenswrapper[4731]: I1125 14:58:12.739705 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"b3a362d59fee811e2d917d30d719e54d86ab96f4d9e5e203d2af1ecf249b4900"} Nov 25 14:58:12 crc kubenswrapper[4731]: I1125 14:58:12.740219 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f8fbc64cbdeb346efee9fcd869e0787889da33ba553ac401bb91031c1ad4d26b"} Nov 25 14:58:12 crc kubenswrapper[4731]: I1125 14:58:12.740547 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e8368a5d-5e6f-4425-8187-46cd17cca8fd" Nov 25 14:58:12 crc kubenswrapper[4731]: I1125 14:58:12.740560 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e8368a5d-5e6f-4425-8187-46cd17cca8fd" Nov 25 14:58:12 crc kubenswrapper[4731]: E1125 14:58:12.740930 4731 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:12 crc kubenswrapper[4731]: I1125 14:58:12.741189 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:12 crc kubenswrapper[4731]: I1125 14:58:12.741430 4731 status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:12 crc kubenswrapper[4731]: I1125 14:58:12.816740 4731 status_manager.go:851] "Failed to get status for pod" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:12 crc kubenswrapper[4731]: I1125 14:58:12.817062 4731 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:12 crc kubenswrapper[4731]: I1125 14:58:12.817437 4731 status_manager.go:851] "Failed to get status for pod" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" pod="openshift-marketplace/redhat-operators-md9zz" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-md9zz\": dial tcp 38.102.83.80:6443: connect: connection refused" Nov 25 14:58:13 crc kubenswrapper[4731]: I1125 14:58:13.749590 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6e2cd129046d2dc4614a950e0c3a301605f763c99d5c5dd6ee654d8896d31c38"} Nov 25 14:58:13 crc kubenswrapper[4731]: I1125 14:58:13.750184 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"643b0a561155830a07fd5e91dc0488c43b4ddc7517fd25059046177c63a9c526"} Nov 25 14:58:14 crc kubenswrapper[4731]: I1125 14:58:14.756660 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"131e1a06288322afaa1af05a7bd90c184515101de45225c598a79e97b5439a22"} Nov 25 14:58:14 crc kubenswrapper[4731]: I1125 14:58:14.757003 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d37b6c6936332af2d8af3e564f1f7729af1db56cce5add1c22e84d0a4224c437"} Nov 25 14:58:15 crc kubenswrapper[4731]: I1125 14:58:15.766445 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8e1dbc7433eacc0b80f82448f308a0895f3c32985828e314b1aa8248c9da9edd"} Nov 25 14:58:15 crc kubenswrapper[4731]: I1125 14:58:15.766671 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:15 crc kubenswrapper[4731]: I1125 14:58:15.766836 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e8368a5d-5e6f-4425-8187-46cd17cca8fd" Nov 25 14:58:15 crc kubenswrapper[4731]: I1125 14:58:15.766870 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e8368a5d-5e6f-4425-8187-46cd17cca8fd" Nov 25 14:58:15 crc kubenswrapper[4731]: I1125 14:58:15.769594 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 14:58:15 crc kubenswrapper[4731]: I1125 14:58:15.769647 4731 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970" exitCode=1 Nov 25 14:58:15 crc kubenswrapper[4731]: I1125 14:58:15.769678 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970"} Nov 25 14:58:15 crc kubenswrapper[4731]: I1125 14:58:15.770264 4731 scope.go:117] "RemoveContainer" containerID="d93e5ec1498d43e946de212d2b66ea0b7801ac9253606124984781ef8b3fc970" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.104580 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" 
podUID="eea531d8-c1b6-4b38-908e-e385c965d06e" containerName="oauth-openshift" containerID="cri-o://99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4" gracePeriod=15 Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.543165 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.605137 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661254 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-serving-cert\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661318 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-dir\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661346 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-trusted-ca-bundle\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661397 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-664g5\" (UniqueName: \"kubernetes.io/projected/eea531d8-c1b6-4b38-908e-e385c965d06e-kube-api-access-664g5\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661433 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-session\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661462 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-ocp-branding-template\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661472 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661490 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-service-ca\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661569 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-error\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661617 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-cliconfig\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661665 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-provider-selection\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661760 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-policies\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661793 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-login\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661861 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-idp-0-file-data\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.661901 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-router-certs\") pod \"eea531d8-c1b6-4b38-908e-e385c965d06e\" (UID: \"eea531d8-c1b6-4b38-908e-e385c965d06e\") " Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.662159 4731 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.662967 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.663564 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.663620 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.663719 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.668772 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.678069 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.687506 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.687984 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.688164 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.688432 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.692566 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea531d8-c1b6-4b38-908e-e385c965d06e-kube-api-access-664g5" (OuterVolumeSpecName: "kube-api-access-664g5") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "kube-api-access-664g5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.694086 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.694793 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "eea531d8-c1b6-4b38-908e-e385c965d06e" (UID: "eea531d8-c1b6-4b38-908e-e385c965d06e"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764298 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764357 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764378 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-664g5\" (UniqueName: \"kubernetes.io/projected/eea531d8-c1b6-4b38-908e-e385c965d06e-kube-api-access-664g5\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764396 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764416 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764436 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764454 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764472 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764493 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764515 4731 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/eea531d8-c1b6-4b38-908e-e385c965d06e-audit-policies\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764536 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764554 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: 
\"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.764600 4731 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/eea531d8-c1b6-4b38-908e-e385c965d06e-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.783612 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.783711 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"743a8aee0ab54ddd1d88c869c91a409f885d5ca9434e01a350497401dd720bd1"} Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.785613 4731 generic.go:334] "Generic (PLEG): container finished" podID="eea531d8-c1b6-4b38-908e-e385c965d06e" containerID="99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4" exitCode=0 Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.785669 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" event={"ID":"eea531d8-c1b6-4b38-908e-e385c965d06e","Type":"ContainerDied","Data":"99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4"} Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.785683 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.785708 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fp5cw" event={"ID":"eea531d8-c1b6-4b38-908e-e385c965d06e","Type":"ContainerDied","Data":"18468b64520c921ea2a2154532a287ba720f0059429cee4e546aa925bf1cdceb"} Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.785732 4731 scope.go:117] "RemoveContainer" containerID="99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.832838 4731 scope.go:117] "RemoveContainer" containerID="99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.833009 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.833065 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:16 crc kubenswrapper[4731]: E1125 14:58:16.833666 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4\": container with ID starting with 99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4 not found: ID does not exist" containerID="99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.833744 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4"} err="failed to get container status \"99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4\": rpc error: code = NotFound desc = could not find container \"99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4\": container with ID starting with 99d4d85a43ffeb2782f5248a08ff0c9dc3c54928e2c7e18db735cd45a8d972e4 not found: ID does not exist" Nov 25 14:58:16 crc kubenswrapper[4731]: I1125 14:58:16.840433 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:20 crc kubenswrapper[4731]: I1125 14:58:20.777353 4731 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:20 crc kubenswrapper[4731]: I1125 14:58:20.810542 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e8368a5d-5e6f-4425-8187-46cd17cca8fd" Nov 25 14:58:20 crc kubenswrapper[4731]: I1125 14:58:20.810588 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e8368a5d-5e6f-4425-8187-46cd17cca8fd" Nov 25 14:58:20 crc kubenswrapper[4731]: I1125 14:58:20.821997 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Nov 25 14:58:20 crc kubenswrapper[4731]: I1125 14:58:20.823389 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="011cfca3-2c1a-4d31-96f3-3bfb75dbb5bb" Nov 25 14:58:21 crc kubenswrapper[4731]: I1125 14:58:21.814909 4731 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e8368a5d-5e6f-4425-8187-46cd17cca8fd" Nov 25 14:58:21 crc kubenswrapper[4731]: I1125 14:58:21.814937 4731 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="e8368a5d-5e6f-4425-8187-46cd17cca8fd" Nov 25 14:58:21 crc kubenswrapper[4731]: I1125 14:58:21.819349 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="011cfca3-2c1a-4d31-96f3-3bfb75dbb5bb" Nov 25 14:58:22 crc kubenswrapper[4731]: I1125 14:58:22.313151 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:58:26 crc kubenswrapper[4731]: I1125 14:58:26.605654 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:58:26 crc kubenswrapper[4731]: I1125 14:58:26.609703 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:58:26 crc kubenswrapper[4731]: I1125 14:58:26.844062 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Nov 25 14:58:27 crc kubenswrapper[4731]: I1125 14:58:27.469179 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Nov 25 14:58:28 crc kubenswrapper[4731]: I1125 14:58:28.980917 4731 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Nov 25 14:58:29 crc kubenswrapper[4731]: I1125 14:58:29.792955 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Nov 25 14:58:30 crc kubenswrapper[4731]: I1125 14:58:30.603315 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Nov 25 14:58:31 crc kubenswrapper[4731]: I1125 14:58:31.125391 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Nov 25 14:58:31 crc kubenswrapper[4731]: I1125 14:58:31.192793 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Nov 25 14:58:31 crc kubenswrapper[4731]: I1125 14:58:31.957651 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Nov 25 14:58:32 crc kubenswrapper[4731]: I1125 14:58:32.296565 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Nov 25 14:58:32 crc kubenswrapper[4731]: I1125 14:58:32.322733 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Nov 25 14:58:32 crc kubenswrapper[4731]: I1125 14:58:32.371743 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Nov 25 14:58:32 crc kubenswrapper[4731]: I1125 14:58:32.382855 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Nov 25 14:58:32 crc kubenswrapper[4731]: I1125 14:58:32.997700 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Nov 25 14:58:33 crc kubenswrapper[4731]: I1125 14:58:33.056031 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Nov 25 14:58:33 crc kubenswrapper[4731]: I1125 14:58:33.072481 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Nov 25 14:58:33 crc kubenswrapper[4731]: I1125 14:58:33.148132 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Nov 25 14:58:33 crc kubenswrapper[4731]: I1125 14:58:33.383103 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Nov 25 14:58:33 crc kubenswrapper[4731]: I1125 14:58:33.434840 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Nov 25 14:58:33 crc kubenswrapper[4731]: I1125 14:58:33.650075 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Nov 25 14:58:33 crc kubenswrapper[4731]: I1125 14:58:33.681040 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Nov 25 14:58:33 crc kubenswrapper[4731]: I1125 14:58:33.745496 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Nov 25 14:58:33 crc kubenswrapper[4731]: I1125 14:58:33.857866 4731 
Nov 25 14:58:33 crc kubenswrapper[4731]: I1125 14:58:33.871938 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Nov 25 14:58:34 crc kubenswrapper[4731]: I1125 14:58:34.071774 4731 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Nov 25 14:58:34 crc kubenswrapper[4731]: I1125 14:58:34.178489 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Nov 25 14:58:34 crc kubenswrapper[4731]: I1125 14:58:34.237901 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Nov 25 14:58:34 crc kubenswrapper[4731]: I1125 14:58:34.296241 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Nov 25 14:58:34 crc kubenswrapper[4731]: I1125 14:58:34.314561 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Nov 25 14:58:34 crc kubenswrapper[4731]: I1125 14:58:34.360154 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Nov 25 14:58:34 crc kubenswrapper[4731]: I1125 14:58:34.595580 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Nov 25 14:58:34 crc kubenswrapper[4731]: I1125 14:58:34.705915 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Nov 25 14:58:34 crc kubenswrapper[4731]: I1125 14:58:34.801624 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Nov 25 14:58:34 crc kubenswrapper[4731]: I1125 14:58:34.933254 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.130379 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.160414 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.296237 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.332268 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.368014 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.440806 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.516074 4731 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.521689 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-fp5cw"]
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.521786 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.527366 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.530929 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.538363 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.538338279 podStartE2EDuration="15.538338279s" podCreationTimestamp="2025-11-25 14:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:58:35.537728434 +0000 UTC m=+274.284862874" watchObservedRunningTime="2025-11-25 14:58:35.538338279 +0000 UTC m=+274.285472729"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.574342 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.615543 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.650291 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.652452 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.790439 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.805501 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Nov 25 14:58:35 crc kubenswrapper[4731]: I1125 14:58:35.945687 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.043566 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.050677 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.189877 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.298053 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.406463 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.555458 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.596191 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.763111 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.787328 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.799774 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.821542 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea531d8-c1b6-4b38-908e-e385c965d06e" path="/var/lib/kubelet/pods/eea531d8-c1b6-4b38-908e-e385c965d06e/volumes"
Nov 25 14:58:36 crc kubenswrapper[4731]: I1125 14:58:36.974501 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.024984 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.054377 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.081628 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.147590 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.167278 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.180317 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.302453 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.336211 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.419903 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.419917 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.449651 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.557722 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.650012 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.798159 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.876052 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.883638 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.939851 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Nov 25 14:58:37 crc kubenswrapper[4731]: I1125 14:58:37.942490 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.077311 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.120353 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.155973 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.171367 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.235566 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.237556 4731 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.258560 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.276404 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.294831 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.319386 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.343440 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.501476 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.523101 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
object-"openshift-network-node-identity"/"kube-root-ca.crt" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.532067 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.541352 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.566716 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.651370 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.662300 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.693494 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.770144 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.775959 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.829916 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.840563 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.841393 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.866035 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.876084 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.991545 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Nov 25 14:58:38 crc kubenswrapper[4731]: I1125 14:58:38.998350 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.054786 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.068035 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.160692 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.211605 4731 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.272894 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.285889 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.297272 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.336104 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.457520 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.522700 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.591230 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.694364 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.716430 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.811425 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.821931 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.926628 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Nov 25 14:58:39 crc kubenswrapper[4731]: I1125 14:58:39.963380 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.110019 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.110315 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.111042 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.114195 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.159560 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.227208 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.252065 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.272748 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.367211 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.388603 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.413392 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.497372 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.506545 4731 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.533145 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.703866 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.717256 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.717955 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.734328 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.841744 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.857529 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Nov 25 14:58:40 crc kubenswrapper[4731]: I1125 14:58:40.873860 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.084811 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.253865 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.267568 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.320132 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6f9b8778db-hwrn2"]
Nov 25 14:58:41 crc kubenswrapper[4731]: E1125 14:58:41.320367 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" containerName="installer"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.320381 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" containerName="installer"
Nov 25 14:58:41 crc kubenswrapper[4731]: E1125 14:58:41.320396 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea531d8-c1b6-4b38-908e-e385c965d06e" containerName="oauth-openshift"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.320401 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea531d8-c1b6-4b38-908e-e385c965d06e" containerName="oauth-openshift"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.320493 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d487e2a-09a0-47ff-90eb-0ed913b60ba0" containerName="installer"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.320511 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea531d8-c1b6-4b38-908e-e385c965d06e" containerName="oauth-openshift"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.320963 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.323651 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.325081 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.325275 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.325322 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.325436 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.325486 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.325563 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.327495 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.327848 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.327845 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
object-"openshift-authentication"/"v4-0-config-system-service-ca" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.328140 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.334584 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.341324 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.341424 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f9b8778db-hwrn2"] Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.345207 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.347027 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.460557 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.460604 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.460629 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.460872 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-audit-dir\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.460971 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc 
kubenswrapper[4731]: I1125 14:58:41.461024 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.461082 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.461212 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkm8n\" (UniqueName: \"kubernetes.io/projected/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-kube-api-access-rkm8n\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.461313 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.461353 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-audit-policies\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.461381 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-session\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.461405 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.461431 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-template-error\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: 
\"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.461458 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-template-login\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.462353 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.562994 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.563054 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.563094 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.563147 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-audit-dir\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.563188 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.563216 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.563258 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.563295 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkm8n\" (UniqueName: \"kubernetes.io/projected/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-kube-api-access-rkm8n\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.563309 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-audit-dir\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.564341 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.564404 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-session\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.564433 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-audit-policies\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.564459 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-template-error\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.564486 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.564512 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-template-login\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.565268 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.565330 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.565906 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-audit-policies\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.565949 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.566116 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.569330 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.569570 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.569809 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-template-login\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 
14:58:41.570290 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.571067 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.571503 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.575492 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-user-template-error\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.577520 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.578240 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-v4-0-config-system-session\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.583299 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkm8n\" (UniqueName: \"kubernetes.io/projected/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed-kube-api-access-rkm8n\") pod \"oauth-openshift-6f9b8778db-hwrn2\" (UID: \"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\") " pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.645714 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.704863 4731 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.729876 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.731200 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.765719 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.820411 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.860334 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.906806 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.980999 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Nov 25 14:58:41 crc kubenswrapper[4731]: I1125 14:58:41.986855 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.012223 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.111298 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.278639 4731 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.279078 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc" gracePeriod=5 Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.302838 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.487001 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.494459 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.519641 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 
14:58:42.540593 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.540698 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.577435 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.639267 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.665935 4731 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.721373 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.805771 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.833829 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Nov 25 14:58:42 crc kubenswrapper[4731]: I1125 14:58:42.936506 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.017938 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.058527 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.086929 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.207583 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.235006 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.238745 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.310926 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.392796 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.550292 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.617022 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.710545 4731 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.772008 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.791249 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.829321 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.877735 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Nov 25 14:58:43 crc kubenswrapper[4731]: I1125 14:58:43.925551 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.086642 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.141443 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.184663 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.284851 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.286467 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.286692 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.286920 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.339262 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.450277 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.555274 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.568150 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Nov 25 14:58:44 crc kubenswrapper[4731]: E1125 14:58:44.569283 4731 log.go:32] "RunPodSandbox from runtime service failed" err=< Nov 25 14:58:44 crc kubenswrapper[4731]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6f9b8778db-hwrn2_openshift-authentication_109b4ff4-a4cb-43d8-98b2-2d72fa0104ed_0(fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e): error adding pod openshift-authentication_oauth-openshift-6f9b8778db-hwrn2 to CNI network "multus-cni-network": 
plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e" Netns:"/var/run/netns/14d6ad0d-c7e4-41a3-baef-6ec56d9e51f8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6f9b8778db-hwrn2;K8S_POD_INFRA_CONTAINER_ID=fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e;K8S_POD_UID=109b4ff4-a4cb-43d8-98b2-2d72fa0104ed" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6f9b8778db-hwrn2] networking: Multus: [openshift-authentication/oauth-openshift-6f9b8778db-hwrn2/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6f9b8778db-hwrn2 in out of cluster comm: pod "oauth-openshift-6f9b8778db-hwrn2" not found Nov 25 14:58:44 crc kubenswrapper[4731]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 14:58:44 crc kubenswrapper[4731]: > Nov 25 14:58:44 crc kubenswrapper[4731]: E1125 14:58:44.569335 4731 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Nov 25 14:58:44 crc kubenswrapper[4731]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6f9b8778db-hwrn2_openshift-authentication_109b4ff4-a4cb-43d8-98b2-2d72fa0104ed_0(fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e): error adding pod openshift-authentication_oauth-openshift-6f9b8778db-hwrn2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e" Netns:"/var/run/netns/14d6ad0d-c7e4-41a3-baef-6ec56d9e51f8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6f9b8778db-hwrn2;K8S_POD_INFRA_CONTAINER_ID=fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e;K8S_POD_UID=109b4ff4-a4cb-43d8-98b2-2d72fa0104ed" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6f9b8778db-hwrn2] networking: Multus: [openshift-authentication/oauth-openshift-6f9b8778db-hwrn2/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6f9b8778db-hwrn2 in out of cluster comm: pod "oauth-openshift-6f9b8778db-hwrn2" not found Nov 25 14:58:44 crc kubenswrapper[4731]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 14:58:44 crc kubenswrapper[4731]: > pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:44 crc kubenswrapper[4731]: E1125 14:58:44.569354 4731 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err=< Nov 25 14:58:44 crc kubenswrapper[4731]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6f9b8778db-hwrn2_openshift-authentication_109b4ff4-a4cb-43d8-98b2-2d72fa0104ed_0(fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e): error adding pod openshift-authentication_oauth-openshift-6f9b8778db-hwrn2 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e" Netns:"/var/run/netns/14d6ad0d-c7e4-41a3-baef-6ec56d9e51f8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6f9b8778db-hwrn2;K8S_POD_INFRA_CONTAINER_ID=fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e;K8S_POD_UID=109b4ff4-a4cb-43d8-98b2-2d72fa0104ed" Path:"" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6f9b8778db-hwrn2] networking: Multus: [openshift-authentication/oauth-openshift-6f9b8778db-hwrn2/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed]: error setting the networks status, pod was already deleted: SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6f9b8778db-hwrn2 in out of cluster comm: pod "oauth-openshift-6f9b8778db-hwrn2" not found Nov 25 14:58:44 crc kubenswrapper[4731]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Nov 25 14:58:44 crc kubenswrapper[4731]: > pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:44 crc kubenswrapper[4731]: E1125 14:58:44.569412 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"oauth-openshift-6f9b8778db-hwrn2_openshift-authentication(109b4ff4-a4cb-43d8-98b2-2d72fa0104ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"oauth-openshift-6f9b8778db-hwrn2_openshift-authentication(109b4ff4-a4cb-43d8-98b2-2d72fa0104ed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_oauth-openshift-6f9b8778db-hwrn2_openshift-authentication_109b4ff4-a4cb-43d8-98b2-2d72fa0104ed_0(fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e): error adding pod openshift-authentication_oauth-openshift-6f9b8778db-hwrn2 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e\\\" Netns:\\\"/var/run/netns/14d6ad0d-c7e4-41a3-baef-6ec56d9e51f8\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-authentication;K8S_POD_NAME=oauth-openshift-6f9b8778db-hwrn2;K8S_POD_INFRA_CONTAINER_ID=fb04c611f02299e136267def090ba88e135f0fc10d028882c317808aa2b3ed1e;K8S_POD_UID=109b4ff4-a4cb-43d8-98b2-2d72fa0104ed\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-authentication/oauth-openshift-6f9b8778db-hwrn2] networking: Multus: [openshift-authentication/oauth-openshift-6f9b8778db-hwrn2/109b4ff4-a4cb-43d8-98b2-2d72fa0104ed]: error setting the networks status, pod was already deleted: 
SetPodNetworkStatusAnnotation: failed to query the pod oauth-openshift-6f9b8778db-hwrn2 in out of cluster comm: pod \\\"oauth-openshift-6f9b8778db-hwrn2\\\" not found\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" podUID="109b4ff4-a4cb-43d8-98b2-2d72fa0104ed" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.621955 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.873241 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.892970 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.918016 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.937026 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:44 crc kubenswrapper[4731]: I1125 14:58:44.937726 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.051501 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.119010 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.220588 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.271465 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.306790 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.395997 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.412411 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.496546 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.611876 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.667996 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.679116 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.728328 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.861540 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.962528 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Nov 25 14:58:45 crc kubenswrapper[4731]: I1125 14:58:45.990804 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Nov 25 14:58:46 crc kubenswrapper[4731]: I1125 14:58:46.220133 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Nov 25 14:58:46 crc kubenswrapper[4731]: I1125 14:58:46.255859 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Nov 25 14:58:46 crc kubenswrapper[4731]: I1125 14:58:46.306032 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Nov 25 14:58:46 crc kubenswrapper[4731]: I1125 14:58:46.391468 4731 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Nov 25 14:58:46 crc kubenswrapper[4731]: I1125 14:58:46.412971 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Nov 25 14:58:46 crc kubenswrapper[4731]: I1125 14:58:46.430257 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Nov 25 14:58:46 crc kubenswrapper[4731]: I1125 14:58:46.566187 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Nov 25 14:58:46 crc kubenswrapper[4731]: I1125 14:58:46.721303 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Nov 25 14:58:46 crc kubenswrapper[4731]: I1125 14:58:46.813345 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Nov 25 14:58:46 crc kubenswrapper[4731]: I1125 14:58:46.878103 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.394127 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.421183 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.421266 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.481426 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.553077 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.553869 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.553962 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.554268 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.554275 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.554361 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.554375 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.554396 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.554502 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.555057 4731 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.555124 4731 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.555143 4731 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.555194 4731 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.564680 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.579866 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.626649 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.656338 4731 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.660629 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f9b8778db-hwrn2"] Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.771758 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.888633 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.966041 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" event={"ID":"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed","Type":"ContainerStarted","Data":"936bb0bafb50efba7aec15effe61d8ec4b5c468de8a854c797338a3581aa8a61"} Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.966105 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" event={"ID":"109b4ff4-a4cb-43d8-98b2-2d72fa0104ed","Type":"ContainerStarted","Data":"4fc31a0b25852398680e1b7d0bdff3b68a2dd8bfa2f47766582d0255f38814f9"} Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.966525 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.968733 4731 patch_prober.go:28] interesting pod/oauth-openshift-6f9b8778db-hwrn2 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" start-of-body= Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.968807 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" podUID="109b4ff4-a4cb-43d8-98b2-2d72fa0104ed" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.56:6443/healthz\": dial tcp 10.217.0.56:6443: connect: connection refused" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.970385 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.970498 4731 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc" exitCode=137 Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.970583 4731 scope.go:117] "RemoveContainer" containerID="3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc" 
Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.970895 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.990623 4731 scope.go:117] "RemoveContainer" containerID="3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc" Nov 25 14:58:47 crc kubenswrapper[4731]: E1125 14:58:47.992040 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc\": container with ID starting with 3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc not found: ID does not exist" containerID="3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.992187 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc"} err="failed to get container status \"3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc\": rpc error: code = NotFound desc = could not find container \"3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc\": container with ID starting with 3ff470a63215bf89993c7bf4cdad5c6f767ed4715933095b9e32cb7d81507edc not found: ID does not exist" Nov 25 14:58:47 crc kubenswrapper[4731]: I1125 14:58:47.999790 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" podStartSLOduration=56.999756624 podStartE2EDuration="56.999756624s" podCreationTimestamp="2025-11-25 14:57:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:58:47.9913885 +0000 UTC m=+286.738522950" watchObservedRunningTime="2025-11-25 14:58:47.999756624 +0000 UTC m=+286.746891054" Nov 25 14:58:48 crc kubenswrapper[4731]: I1125 14:58:48.820485 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Nov 25 14:58:48 crc kubenswrapper[4731]: I1125 14:58:48.826317 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Nov 25 14:58:48 crc kubenswrapper[4731]: I1125 14:58:48.991707 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f9b8778db-hwrn2" Nov 25 14:58:49 crc kubenswrapper[4731]: I1125 14:58:49.381760 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.078107 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q9p29"] Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.078994 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q9p29" podUID="b750a03d-5b79-4d0c-b569-674089d31556" containerName="registry-server" containerID="cri-o://55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a" gracePeriod=30 Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.085026 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-zjpwq"] Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.085347 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zjpwq" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerName="registry-server" containerID="cri-o://88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a" gracePeriod=30 Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.106693 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5mpt"] Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.106986 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" podUID="eb067501-cc83-4cdd-ad52-224774a043c4" containerName="marketplace-operator" containerID="cri-o://74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2" gracePeriod=30 Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.137061 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj7cc"] Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.137456 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qj7cc" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerName="registry-server" containerID="cri-o://21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25" gracePeriod=30 Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.162103 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-md9zz"] Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.162480 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-md9zz" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerName="registry-server" containerID="cri-o://ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1" gracePeriod=30 Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.167011 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vk7l9"] Nov 25 14:58:54 crc kubenswrapper[4731]: E1125 14:58:54.167436 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.167479 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.167648 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.168405 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.175850 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vk7l9"] Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.258452 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2807ad74-1d0f-435b-bfb1-3b5eb4dca20d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vk7l9\" (UID: \"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.258548 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl2r5\" (UniqueName: \"kubernetes.io/projected/2807ad74-1d0f-435b-bfb1-3b5eb4dca20d-kube-api-access-jl2r5\") pod \"marketplace-operator-79b997595-vk7l9\" (UID: \"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.258593 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2807ad74-1d0f-435b-bfb1-3b5eb4dca20d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vk7l9\" (UID: \"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.360472 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2807ad74-1d0f-435b-bfb1-3b5eb4dca20d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vk7l9\" (UID: \"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.360552 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl2r5\" (UniqueName: \"kubernetes.io/projected/2807ad74-1d0f-435b-bfb1-3b5eb4dca20d-kube-api-access-jl2r5\") pod \"marketplace-operator-79b997595-vk7l9\" (UID: \"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.360593 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/2807ad74-1d0f-435b-bfb1-3b5eb4dca20d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vk7l9\" (UID: \"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.362742 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2807ad74-1d0f-435b-bfb1-3b5eb4dca20d-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-vk7l9\" (UID: \"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.371764 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/2807ad74-1d0f-435b-bfb1-3b5eb4dca20d-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-vk7l9\" (UID: \"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.381235 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl2r5\" (UniqueName: \"kubernetes.io/projected/2807ad74-1d0f-435b-bfb1-3b5eb4dca20d-kube-api-access-jl2r5\") pod \"marketplace-operator-79b997595-vk7l9\" (UID: \"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d\") " pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.568019 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.569142 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.580605 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.585914 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.601367 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.620593 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.664316 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-utilities\") pod \"b750a03d-5b79-4d0c-b569-674089d31556\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.664410 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-utilities\") pod \"4c945221-c3ce-443a-82c2-5eb45a00f65c\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.664480 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv274\" (UniqueName: \"kubernetes.io/projected/eb067501-cc83-4cdd-ad52-224774a043c4-kube-api-access-mv274\") pod \"eb067501-cc83-4cdd-ad52-224774a043c4\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.664513 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-catalog-content\") pod \"b750a03d-5b79-4d0c-b569-674089d31556\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.664614 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-catalog-content\") pod \"4c945221-c3ce-443a-82c2-5eb45a00f65c\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.664640 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c27k8\" (UniqueName: \"kubernetes.io/projected/b750a03d-5b79-4d0c-b569-674089d31556-kube-api-access-c27k8\") pod \"b750a03d-5b79-4d0c-b569-674089d31556\" (UID: \"b750a03d-5b79-4d0c-b569-674089d31556\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.664721 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-484x8\" (UniqueName: \"kubernetes.io/projected/4c945221-c3ce-443a-82c2-5eb45a00f65c-kube-api-access-484x8\") pod \"4c945221-c3ce-443a-82c2-5eb45a00f65c\" (UID: \"4c945221-c3ce-443a-82c2-5eb45a00f65c\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.664786 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-trusted-ca\") pod \"eb067501-cc83-4cdd-ad52-224774a043c4\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.664805 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-operator-metrics\") pod \"eb067501-cc83-4cdd-ad52-224774a043c4\" (UID: \"eb067501-cc83-4cdd-ad52-224774a043c4\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.665352 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-utilities" 
(OuterVolumeSpecName: "utilities") pod "b750a03d-5b79-4d0c-b569-674089d31556" (UID: "b750a03d-5b79-4d0c-b569-674089d31556"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.667525 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-utilities" (OuterVolumeSpecName: "utilities") pod "4c945221-c3ce-443a-82c2-5eb45a00f65c" (UID: "4c945221-c3ce-443a-82c2-5eb45a00f65c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.670362 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b750a03d-5b79-4d0c-b569-674089d31556-kube-api-access-c27k8" (OuterVolumeSpecName: "kube-api-access-c27k8") pod "b750a03d-5b79-4d0c-b569-674089d31556" (UID: "b750a03d-5b79-4d0c-b569-674089d31556"). InnerVolumeSpecName "kube-api-access-c27k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.671222 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "eb067501-cc83-4cdd-ad52-224774a043c4" (UID: "eb067501-cc83-4cdd-ad52-224774a043c4"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.685357 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb067501-cc83-4cdd-ad52-224774a043c4-kube-api-access-mv274" (OuterVolumeSpecName: "kube-api-access-mv274") pod "eb067501-cc83-4cdd-ad52-224774a043c4" (UID: "eb067501-cc83-4cdd-ad52-224774a043c4"). InnerVolumeSpecName "kube-api-access-mv274". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.685951 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c945221-c3ce-443a-82c2-5eb45a00f65c-kube-api-access-484x8" (OuterVolumeSpecName: "kube-api-access-484x8") pod "4c945221-c3ce-443a-82c2-5eb45a00f65c" (UID: "4c945221-c3ce-443a-82c2-5eb45a00f65c"). InnerVolumeSpecName "kube-api-access-484x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.691948 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "eb067501-cc83-4cdd-ad52-224774a043c4" (UID: "eb067501-cc83-4cdd-ad52-224774a043c4"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.726330 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b750a03d-5b79-4d0c-b569-674089d31556" (UID: "b750a03d-5b79-4d0c-b569-674089d31556"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.732082 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4c945221-c3ce-443a-82c2-5eb45a00f65c" (UID: "4c945221-c3ce-443a-82c2-5eb45a00f65c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.765703 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-utilities\") pod \"9d0d6998-e333-4b4e-93c3-d2d346675f07\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.765789 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-catalog-content\") pod \"9d0d6998-e333-4b4e-93c3-d2d346675f07\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.765851 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-catalog-content\") pod \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.765981 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-utilities\") pod \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766012 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7qk7\" (UniqueName: \"kubernetes.io/projected/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-kube-api-access-r7qk7\") pod \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\" (UID: \"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766062 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qszvq\" (UniqueName: \"kubernetes.io/projected/9d0d6998-e333-4b4e-93c3-d2d346675f07-kube-api-access-qszvq\") pod \"9d0d6998-e333-4b4e-93c3-d2d346675f07\" (UID: \"9d0d6998-e333-4b4e-93c3-d2d346675f07\") " Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766332 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-484x8\" (UniqueName: \"kubernetes.io/projected/4c945221-c3ce-443a-82c2-5eb45a00f65c-kube-api-access-484x8\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766354 4731 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766368 4731 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/eb067501-cc83-4cdd-ad52-224774a043c4-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 
14:58:54.766382 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766394 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766407 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv274\" (UniqueName: \"kubernetes.io/projected/eb067501-cc83-4cdd-ad52-224774a043c4-kube-api-access-mv274\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766419 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b750a03d-5b79-4d0c-b569-674089d31556-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766431 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4c945221-c3ce-443a-82c2-5eb45a00f65c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766443 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c27k8\" (UniqueName: \"kubernetes.io/projected/b750a03d-5b79-4d0c-b569-674089d31556-kube-api-access-c27k8\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.766701 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-utilities" (OuterVolumeSpecName: "utilities") pod "9d0d6998-e333-4b4e-93c3-d2d346675f07" (UID: "9d0d6998-e333-4b4e-93c3-d2d346675f07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.767250 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-utilities" (OuterVolumeSpecName: "utilities") pod "6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" (UID: "6c85939e-8d19-4f0a-8ad9-2dd4f29717ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.770191 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d0d6998-e333-4b4e-93c3-d2d346675f07-kube-api-access-qszvq" (OuterVolumeSpecName: "kube-api-access-qszvq") pod "9d0d6998-e333-4b4e-93c3-d2d346675f07" (UID: "9d0d6998-e333-4b4e-93c3-d2d346675f07"). InnerVolumeSpecName "kube-api-access-qszvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.770262 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-kube-api-access-r7qk7" (OuterVolumeSpecName: "kube-api-access-r7qk7") pod "6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" (UID: "6c85939e-8d19-4f0a-8ad9-2dd4f29717ca"). InnerVolumeSpecName "kube-api-access-r7qk7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.790382 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9d0d6998-e333-4b4e-93c3-d2d346675f07" (UID: "9d0d6998-e333-4b4e-93c3-d2d346675f07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.827795 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-vk7l9"] Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.867313 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7qk7\" (UniqueName: \"kubernetes.io/projected/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-kube-api-access-r7qk7\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.867386 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.867421 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qszvq\" (UniqueName: \"kubernetes.io/projected/9d0d6998-e333-4b4e-93c3-d2d346675f07-kube-api-access-qszvq\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.867435 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.867449 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d0d6998-e333-4b4e-93c3-d2d346675f07-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.868213 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" (UID: "6c85939e-8d19-4f0a-8ad9-2dd4f29717ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 14:58:54 crc kubenswrapper[4731]: I1125 14:58:54.969445 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.019711 4731 generic.go:334] "Generic (PLEG): container finished" podID="eb067501-cc83-4cdd-ad52-224774a043c4" containerID="74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2" exitCode=0 Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.019831 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.019800 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" event={"ID":"eb067501-cc83-4cdd-ad52-224774a043c4","Type":"ContainerDied","Data":"74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.020495 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-t5mpt" event={"ID":"eb067501-cc83-4cdd-ad52-224774a043c4","Type":"ContainerDied","Data":"8748d2f508415e09f66e93d3a324abef1e94a93f0ecee0c1efa7a43f17cbe35d"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.020544 4731 scope.go:117] "RemoveContainer" containerID="74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.022984 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" event={"ID":"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d","Type":"ContainerStarted","Data":"fcddcc2a3e763e0758c6e86b0102b2442d3ff0145d3b655c5e2b2e4c61312a8b"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.023039 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" event={"ID":"2807ad74-1d0f-435b-bfb1-3b5eb4dca20d","Type":"ContainerStarted","Data":"d6f8ee17f6f46dcea167539061b0e1e5b4524f5f6f88d789442b4aaf5dfcdff5"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.023415 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.026551 4731 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-vk7l9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" start-of-body= Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.026648 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" podUID="2807ad74-1d0f-435b-bfb1-3b5eb4dca20d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.57:8080/healthz\": dial tcp 10.217.0.57:8080: connect: connection refused" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.035102 4731 generic.go:334] "Generic (PLEG): container finished" podID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerID="21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25" exitCode=0 Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.035264 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qj7cc" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.035282 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj7cc" event={"ID":"9d0d6998-e333-4b4e-93c3-d2d346675f07","Type":"ContainerDied","Data":"21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.035354 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qj7cc" event={"ID":"9d0d6998-e333-4b4e-93c3-d2d346675f07","Type":"ContainerDied","Data":"ee9598a7a6e9f4c4539a47bf34aa253c2f291d1534455df82e1913ef91109818"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.040238 4731 generic.go:334] "Generic (PLEG): container finished" podID="b750a03d-5b79-4d0c-b569-674089d31556" containerID="55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a" exitCode=0 Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.040339 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9p29" event={"ID":"b750a03d-5b79-4d0c-b569-674089d31556","Type":"ContainerDied","Data":"55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.040428 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q9p29" event={"ID":"b750a03d-5b79-4d0c-b569-674089d31556","Type":"ContainerDied","Data":"084981be65b1653d62a97bd8ff952e3ed317474b066cdab2f42134e397ead574"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.040424 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q9p29" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.046540 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" podStartSLOduration=1.046497771 podStartE2EDuration="1.046497771s" podCreationTimestamp="2025-11-25 14:58:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:58:55.044268844 +0000 UTC m=+293.791403294" watchObservedRunningTime="2025-11-25 14:58:55.046497771 +0000 UTC m=+293.793632221" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.048711 4731 scope.go:117] "RemoveContainer" containerID="74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.048930 4731 generic.go:334] "Generic (PLEG): container finished" podID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerID="88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a" exitCode=0 Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.049123 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zjpwq" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.049295 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zjpwq" event={"ID":"4c945221-c3ce-443a-82c2-5eb45a00f65c","Type":"ContainerDied","Data":"88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.049388 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zjpwq" event={"ID":"4c945221-c3ce-443a-82c2-5eb45a00f65c","Type":"ContainerDied","Data":"25943d9fff31d891bb04f7ed571beeb4cd86769d02960798aea5c34c65bff4c9"} Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.051237 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2\": container with ID starting with 74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2 not found: ID does not exist" containerID="74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.051286 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2"} err="failed to get container status \"74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2\": rpc error: code = NotFound desc = could not find container \"74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2\": container with ID starting with 74525b6fba75c77f1efd4243a14d374adb63ac2f0ff84bd21ac1f720c0a7d7e2 not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.051325 4731 scope.go:117] "RemoveContainer" containerID="21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.055866 4731 generic.go:334] "Generic (PLEG): container finished" podID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerID="ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1" exitCode=0 Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.055943 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-md9zz" event={"ID":"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca","Type":"ContainerDied","Data":"ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.055963 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-md9zz" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.055987 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-md9zz" event={"ID":"6c85939e-8d19-4f0a-8ad9-2dd4f29717ca","Type":"ContainerDied","Data":"f44c3660070b22b81d1aec91b377d8b8298562b43b2d4da5eb9bd324feb4a0ca"} Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.067784 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5mpt"] Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.078216 4731 scope.go:117] "RemoveContainer" containerID="1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.080405 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-t5mpt"] Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.089648 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj7cc"] Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.100912 4731 scope.go:117] "RemoveContainer" containerID="e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.102062 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qj7cc"] Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.111596 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zjpwq"] Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.116764 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zjpwq"] Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.119188 4731 scope.go:117] "RemoveContainer" containerID="21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.119710 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25\": container with ID starting with 21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25 not found: ID does not exist" containerID="21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.119794 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25"} err="failed to get container status \"21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25\": rpc error: code = NotFound desc = could not find container \"21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25\": container with ID starting with 21b3a5928d1bac60766588fce343ca3908b0d548e24d88624888e7d625d72b25 not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.119850 4731 scope.go:117] "RemoveContainer" containerID="1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.120327 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62\": container with ID starting with 
1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62 not found: ID does not exist" containerID="1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.120349 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62"} err="failed to get container status \"1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62\": rpc error: code = NotFound desc = could not find container \"1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62\": container with ID starting with 1311e651b90d08212e71c0a93d52e0667021b5a03ff22461c6372f56f11dab62 not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.120413 4731 scope.go:117] "RemoveContainer" containerID="e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.120712 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69\": container with ID starting with e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69 not found: ID does not exist" containerID="e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.120742 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69"} err="failed to get container status \"e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69\": rpc error: code = NotFound desc = could not find container \"e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69\": container with ID starting with e8db32b3c79a106ac3ffa8aea64f6244436898db069a72d6bdfe20a49aef6a69 not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.120759 4731 scope.go:117] "RemoveContainer" containerID="55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.122202 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-md9zz"] Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.126018 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-md9zz"] Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.131896 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q9p29"] Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.135207 4731 scope.go:117] "RemoveContainer" containerID="d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.138318 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q9p29"] Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.155315 4731 scope.go:117] "RemoveContainer" containerID="8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.175365 4731 scope.go:117] "RemoveContainer" containerID="55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.176229 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a\": container with ID starting with 55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a not found: ID does not exist" containerID="55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.176329 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a"} err="failed to get container status \"55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a\": rpc error: code = NotFound desc = could not find container \"55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a\": container with ID starting with 55a966fe9c4c77ec432255dd0972b9d27f40ed11f2327f836be5b47fd47ea00a not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.176378 4731 scope.go:117] "RemoveContainer" containerID="d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.176999 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b\": container with ID starting with d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b not found: ID does not exist" containerID="d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.177117 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b"} err="failed to get container status \"d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b\": rpc error: code = NotFound desc = could not find container \"d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b\": container with ID starting with d987f63c18b798da76c3504ce4c85f57e4cc97f49914eca9afe50e3b7820355b not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.177171 4731 scope.go:117] "RemoveContainer" containerID="8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.178898 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0\": container with ID starting with 8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0 not found: ID does not exist" containerID="8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.178947 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0"} err="failed to get container status \"8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0\": rpc error: code = NotFound desc = could not find container \"8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0\": container with ID starting with 8fc0cc25c398979f9c0ed43f245215d2ba72c3d3c925e5f682ce89f46b7823d0 not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.178981 4731 scope.go:117] "RemoveContainer" 
containerID="88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.198876 4731 scope.go:117] "RemoveContainer" containerID="f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.255933 4731 scope.go:117] "RemoveContainer" containerID="d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.277870 4731 scope.go:117] "RemoveContainer" containerID="88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.278551 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a\": container with ID starting with 88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a not found: ID does not exist" containerID="88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.278584 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a"} err="failed to get container status \"88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a\": rpc error: code = NotFound desc = could not find container \"88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a\": container with ID starting with 88715a7897c189c6b2a44219c42ec3c4d8d331ca62857438384f8c6ff0748c4a not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.278617 4731 scope.go:117] "RemoveContainer" containerID="f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.279064 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774\": container with ID starting with f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774 not found: ID does not exist" containerID="f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.279111 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774"} err="failed to get container status \"f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774\": rpc error: code = NotFound desc = could not find container \"f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774\": container with ID starting with f4762565231db80eccba2ce37dabeb77a1c77ff5c11441979f562487b6af0774 not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.279143 4731 scope.go:117] "RemoveContainer" containerID="d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.279676 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51\": container with ID starting with d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51 not found: ID does not exist" containerID="d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51" 
Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.279724 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51"} err="failed to get container status \"d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51\": rpc error: code = NotFound desc = could not find container \"d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51\": container with ID starting with d2e01289a5b881eef3a3544aef10b194b6ba604d8cc30fb9276468706d6eea51 not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.279746 4731 scope.go:117] "RemoveContainer" containerID="ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.295903 4731 scope.go:117] "RemoveContainer" containerID="c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.310182 4731 scope.go:117] "RemoveContainer" containerID="659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.346443 4731 scope.go:117] "RemoveContainer" containerID="ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.348319 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1\": container with ID starting with ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1 not found: ID does not exist" containerID="ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.348408 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1"} err="failed to get container status \"ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1\": rpc error: code = NotFound desc = could not find container \"ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1\": container with ID starting with ee63d3d6aabbe7d873beb1badc3f4bf160a0acdcb416e315a3c37fe618db38b1 not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.348454 4731 scope.go:117] "RemoveContainer" containerID="c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.350213 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485\": container with ID starting with c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485 not found: ID does not exist" containerID="c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.350254 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485"} err="failed to get container status \"c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485\": rpc error: code = NotFound desc = could not find container \"c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485\": container with ID starting with 
c54322ca9fc164fcfddd2f3c8e7252e16306197f4a4e4b9105b89a9a5dc98485 not found: ID does not exist" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.350289 4731 scope.go:117] "RemoveContainer" containerID="659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a" Nov 25 14:58:55 crc kubenswrapper[4731]: E1125 14:58:55.351190 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a\": container with ID starting with 659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a not found: ID does not exist" containerID="659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a" Nov 25 14:58:55 crc kubenswrapper[4731]: I1125 14:58:55.351213 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a"} err="failed to get container status \"659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a\": rpc error: code = NotFound desc = could not find container \"659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a\": container with ID starting with 659f3927d58208d8d063b16dd56eb776f46e1acfadeaafcff7f64a691dbb795a not found: ID does not exist" Nov 25 14:58:56 crc kubenswrapper[4731]: I1125 14:58:56.074160 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-vk7l9" Nov 25 14:58:56 crc kubenswrapper[4731]: I1125 14:58:56.819179 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" path="/var/lib/kubelet/pods/4c945221-c3ce-443a-82c2-5eb45a00f65c/volumes" Nov 25 14:58:56 crc kubenswrapper[4731]: I1125 14:58:56.820004 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" path="/var/lib/kubelet/pods/6c85939e-8d19-4f0a-8ad9-2dd4f29717ca/volumes" Nov 25 14:58:56 crc kubenswrapper[4731]: I1125 14:58:56.820693 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" path="/var/lib/kubelet/pods/9d0d6998-e333-4b4e-93c3-d2d346675f07/volumes" Nov 25 14:58:56 crc kubenswrapper[4731]: I1125 14:58:56.821887 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b750a03d-5b79-4d0c-b569-674089d31556" path="/var/lib/kubelet/pods/b750a03d-5b79-4d0c-b569-674089d31556/volumes" Nov 25 14:58:56 crc kubenswrapper[4731]: I1125 14:58:56.822501 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb067501-cc83-4cdd-ad52-224774a043c4" path="/var/lib/kubelet/pods/eb067501-cc83-4cdd-ad52-224774a043c4/volumes" Nov 25 14:59:02 crc kubenswrapper[4731]: I1125 14:59:02.520937 4731 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Nov 25 14:59:05 crc kubenswrapper[4731]: I1125 14:59:05.582435 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2m7w"] Nov 25 14:59:05 crc kubenswrapper[4731]: I1125 14:59:05.582961 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" podUID="f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" containerName="controller-manager" containerID="cri-o://6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6" gracePeriod=30 Nov 25 
14:59:05 crc kubenswrapper[4731]: I1125 14:59:05.679340 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns"] Nov 25 14:59:05 crc kubenswrapper[4731]: I1125 14:59:05.679878 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" podUID="afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" containerName="route-controller-manager" containerID="cri-o://f9bab356c3eef296b4be5dd3c44532bfb94644f6b4913f9c5bc904635b302ed2" gracePeriod=30 Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.067398 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.136576 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w4xm\" (UniqueName: \"kubernetes.io/projected/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-kube-api-access-9w4xm\") pod \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.136666 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-serving-cert\") pod \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.136724 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-client-ca\") pod \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.136751 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-proxy-ca-bundles\") pod \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.136775 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-config\") pod \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\" (UID: \"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca\") " Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.138284 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-config" (OuterVolumeSpecName: "config") pod "f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" (UID: "f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.141948 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" (UID: "f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.142132 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-client-ca" (OuterVolumeSpecName: "client-ca") pod "f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" (UID: "f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.145326 4731 generic.go:334] "Generic (PLEG): container finished" podID="afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" containerID="f9bab356c3eef296b4be5dd3c44532bfb94644f6b4913f9c5bc904635b302ed2" exitCode=0 Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.145492 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" event={"ID":"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5","Type":"ContainerDied","Data":"f9bab356c3eef296b4be5dd3c44532bfb94644f6b4913f9c5bc904635b302ed2"} Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.147484 4731 generic.go:334] "Generic (PLEG): container finished" podID="f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" containerID="6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6" exitCode=0 Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.147538 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" event={"ID":"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca","Type":"ContainerDied","Data":"6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6"} Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.147571 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" event={"ID":"f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca","Type":"ContainerDied","Data":"f055e7a7cf33dc99fd9e67b8bb0d189268e08aa73f09b42b11704d574f269f90"} Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.147588 4731 scope.go:117] "RemoveContainer" containerID="6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.147709 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-c2m7w" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.148396 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-kube-api-access-9w4xm" (OuterVolumeSpecName: "kube-api-access-9w4xm") pod "f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" (UID: "f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca"). InnerVolumeSpecName "kube-api-access-9w4xm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.152179 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" (UID: "f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.181678 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.182051 4731 scope.go:117] "RemoveContainer" containerID="6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6" Nov 25 14:59:06 crc kubenswrapper[4731]: E1125 14:59:06.183297 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6\": container with ID starting with 6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6 not found: ID does not exist" containerID="6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.183341 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6"} err="failed to get container status \"6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6\": rpc error: code = NotFound desc = could not find container \"6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6\": container with ID starting with 6cadeaa45c0d6e944ce2eb9e18dc756386698656c0e2862b8c51f1e947a465a6 not found: ID does not exist" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.238459 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.238518 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.238533 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.238542 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w4xm\" (UniqueName: \"kubernetes.io/projected/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-kube-api-access-9w4xm\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.238551 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.340056 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-serving-cert\") pod \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.340263 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-client-ca\") pod \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.340293 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-config\") pod \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.340349 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vg7b\" (UniqueName: \"kubernetes.io/projected/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-kube-api-access-6vg7b\") pod \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\" (UID: \"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5\") " Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.341633 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-client-ca" (OuterVolumeSpecName: "client-ca") pod "afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" (UID: "afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.341965 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-config" (OuterVolumeSpecName: "config") pod "afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" (UID: "afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.346061 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" (UID: "afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.346173 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-kube-api-access-6vg7b" (OuterVolumeSpecName: "kube-api-access-6vg7b") pod "afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" (UID: "afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5"). InnerVolumeSpecName "kube-api-access-6vg7b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.442224 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.442274 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.442287 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.442300 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vg7b\" (UniqueName: \"kubernetes.io/projected/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5-kube-api-access-6vg7b\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.486826 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2m7w"] Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.491530 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-c2m7w"] Nov 25 14:59:06 crc kubenswrapper[4731]: I1125 14:59:06.820809 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" path="/var/lib/kubelet/pods/f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca/volumes" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.157237 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.157232 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns" event={"ID":"afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5","Type":"ContainerDied","Data":"cf107b996c9122cecbafa2c6761201f01abcb2c5977968fc1e3d2aac66d212bc"} Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.157427 4731 scope.go:117] "RemoveContainer" containerID="f9bab356c3eef296b4be5dd3c44532bfb94644f6b4913f9c5bc904635b302ed2" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.187975 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns"] Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.196252 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-p4sns"] Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.748704 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b"] Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749065 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerName="extract-utilities" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749089 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerName="extract-utilities" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749107 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749116 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749126 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerName="extract-content" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749133 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerName="extract-content" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749144 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" containerName="controller-manager" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749152 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" containerName="controller-manager" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749164 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerName="extract-content" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749171 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerName="extract-content" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749182 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749189 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749200 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerName="extract-content" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749207 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerName="extract-content" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749220 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerName="extract-utilities" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749227 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerName="extract-utilities" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749235 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b750a03d-5b79-4d0c-b569-674089d31556" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749243 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b750a03d-5b79-4d0c-b569-674089d31556" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749255 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb067501-cc83-4cdd-ad52-224774a043c4" containerName="marketplace-operator" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749265 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb067501-cc83-4cdd-ad52-224774a043c4" containerName="marketplace-operator" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749275 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b750a03d-5b79-4d0c-b569-674089d31556" containerName="extract-utilities" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749282 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b750a03d-5b79-4d0c-b569-674089d31556" containerName="extract-utilities" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749295 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749302 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749314 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b750a03d-5b79-4d0c-b569-674089d31556" containerName="extract-content" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749321 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b750a03d-5b79-4d0c-b569-674089d31556" containerName="extract-content" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749330 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerName="extract-utilities" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749338 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerName="extract-utilities" Nov 25 14:59:07 crc kubenswrapper[4731]: E1125 14:59:07.749347 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" containerName="route-controller-manager" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749354 
4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" containerName="route-controller-manager" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749465 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c945221-c3ce-443a-82c2-5eb45a00f65c" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749479 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c85939e-8d19-4f0a-8ad9-2dd4f29717ca" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749489 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d0d6998-e333-4b4e-93c3-d2d346675f07" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749504 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb067501-cc83-4cdd-ad52-224774a043c4" containerName="marketplace-operator" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749514 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7852fce-1e0b-46ca-9c1d-9f4fe3af3fca" containerName="controller-manager" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749523 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" containerName="route-controller-manager" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.749531 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b750a03d-5b79-4d0c-b569-674089d31556" containerName="registry-server" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.750082 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.751684 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.752408 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.753298 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.753570 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-67c9f74866-xzn5h"] Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.753737 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.754406 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.754488 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.754651 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.758425 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.758726 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.759253 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.759466 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.759589 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.759809 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.764551 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.765941 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67c9f74866-xzn5h"] Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.771343 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b"] Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.861487 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-serving-cert\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.861560 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df3f5fc-d372-490e-8d41-114577a84e58-serving-cert\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.861627 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-config\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.861680 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-client-ca\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.861725 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-config\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.861767 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk67f\" (UniqueName: \"kubernetes.io/projected/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-kube-api-access-wk67f\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.861796 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-client-ca\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.861851 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9j6q\" (UniqueName: \"kubernetes.io/projected/7df3f5fc-d372-490e-8d41-114577a84e58-kube-api-access-c9j6q\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.861954 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-proxy-ca-bundles\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.963168 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9j6q\" (UniqueName: \"kubernetes.io/projected/7df3f5fc-d372-490e-8d41-114577a84e58-kube-api-access-c9j6q\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.963562 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-proxy-ca-bundles\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.963673 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-serving-cert\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.963795 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df3f5fc-d372-490e-8d41-114577a84e58-serving-cert\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.963924 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-config\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.964017 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-client-ca\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.964102 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-config\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.964549 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk67f\" (UniqueName: \"kubernetes.io/projected/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-kube-api-access-wk67f\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.964595 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-client-ca\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.965764 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-client-ca\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.965767 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-client-ca\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " 
pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.965793 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-proxy-ca-bundles\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.967307 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-config\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.967323 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-config\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.972294 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df3f5fc-d372-490e-8d41-114577a84e58-serving-cert\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.972851 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-serving-cert\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:07 crc kubenswrapper[4731]: I1125 14:59:07.991799 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9j6q\" (UniqueName: \"kubernetes.io/projected/7df3f5fc-d372-490e-8d41-114577a84e58-kube-api-access-c9j6q\") pod \"route-controller-manager-7c7f6d8788-zhv2b\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:08 crc kubenswrapper[4731]: I1125 14:59:08.004067 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk67f\" (UniqueName: \"kubernetes.io/projected/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-kube-api-access-wk67f\") pod \"controller-manager-67c9f74866-xzn5h\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:08 crc kubenswrapper[4731]: I1125 14:59:08.080683 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:08 crc kubenswrapper[4731]: I1125 14:59:08.089604 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:08 crc kubenswrapper[4731]: I1125 14:59:08.320974 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b"] Nov 25 14:59:08 crc kubenswrapper[4731]: W1125 14:59:08.325976 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7df3f5fc_d372_490e_8d41_114577a84e58.slice/crio-6a5cba685964947a2a2a28cef40d4f8d5739150950947158767471cfd8159cde WatchSource:0}: Error finding container 6a5cba685964947a2a2a28cef40d4f8d5739150950947158767471cfd8159cde: Status 404 returned error can't find the container with id 6a5cba685964947a2a2a28cef40d4f8d5739150950947158767471cfd8159cde Nov 25 14:59:08 crc kubenswrapper[4731]: I1125 14:59:08.371777 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-67c9f74866-xzn5h"] Nov 25 14:59:08 crc kubenswrapper[4731]: W1125 14:59:08.377724 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53cebe09_cd04_4c8b_a567_aaa71a0f0dba.slice/crio-d3825783b01b22465090ab83fb403c1da3550cc10ebf485c3022f58b22bcfbc4 WatchSource:0}: Error finding container d3825783b01b22465090ab83fb403c1da3550cc10ebf485c3022f58b22bcfbc4: Status 404 returned error can't find the container with id d3825783b01b22465090ab83fb403c1da3550cc10ebf485c3022f58b22bcfbc4 Nov 25 14:59:08 crc kubenswrapper[4731]: I1125 14:59:08.821309 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5" path="/var/lib/kubelet/pods/afa66c0d-1a2a-46c6-a2d7-b8a883ca0ec5/volumes" Nov 25 14:59:09 crc kubenswrapper[4731]: I1125 14:59:09.214683 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" event={"ID":"7df3f5fc-d372-490e-8d41-114577a84e58","Type":"ContainerStarted","Data":"b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4"} Nov 25 14:59:09 crc kubenswrapper[4731]: I1125 14:59:09.214743 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" event={"ID":"7df3f5fc-d372-490e-8d41-114577a84e58","Type":"ContainerStarted","Data":"6a5cba685964947a2a2a28cef40d4f8d5739150950947158767471cfd8159cde"} Nov 25 14:59:09 crc kubenswrapper[4731]: I1125 14:59:09.215567 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:09 crc kubenswrapper[4731]: I1125 14:59:09.217289 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" event={"ID":"53cebe09-cd04-4c8b-a567-aaa71a0f0dba","Type":"ContainerStarted","Data":"6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35"} Nov 25 14:59:09 crc kubenswrapper[4731]: I1125 14:59:09.217346 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" event={"ID":"53cebe09-cd04-4c8b-a567-aaa71a0f0dba","Type":"ContainerStarted","Data":"d3825783b01b22465090ab83fb403c1da3550cc10ebf485c3022f58b22bcfbc4"} Nov 25 14:59:09 crc kubenswrapper[4731]: I1125 14:59:09.217538 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:09 crc kubenswrapper[4731]: I1125 14:59:09.226810 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:09 crc kubenswrapper[4731]: I1125 14:59:09.236599 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:09 crc kubenswrapper[4731]: I1125 14:59:09.237736 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" podStartSLOduration=4.237712904 podStartE2EDuration="4.237712904s" podCreationTimestamp="2025-11-25 14:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:59:09.234393439 +0000 UTC m=+307.981527869" watchObservedRunningTime="2025-11-25 14:59:09.237712904 +0000 UTC m=+307.984847334" Nov 25 14:59:09 crc kubenswrapper[4731]: I1125 14:59:09.265684 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" podStartSLOduration=4.265654599 podStartE2EDuration="4.265654599s" podCreationTimestamp="2025-11-25 14:59:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:59:09.262607061 +0000 UTC m=+308.009741501" watchObservedRunningTime="2025-11-25 14:59:09.265654599 +0000 UTC m=+308.012789029" Nov 25 14:59:25 crc kubenswrapper[4731]: I1125 14:59:25.555329 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67c9f74866-xzn5h"] Nov 25 14:59:25 crc kubenswrapper[4731]: I1125 14:59:25.556059 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" podUID="53cebe09-cd04-4c8b-a567-aaa71a0f0dba" containerName="controller-manager" containerID="cri-o://6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35" gracePeriod=30 Nov 25 14:59:25 crc kubenswrapper[4731]: I1125 14:59:25.568231 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b"] Nov 25 14:59:25 crc kubenswrapper[4731]: I1125 14:59:25.568775 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" podUID="7df3f5fc-d372-490e-8d41-114577a84e58" containerName="route-controller-manager" containerID="cri-o://b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4" gracePeriod=30 Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.100067 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.158400 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-config\") pod \"7df3f5fc-d372-490e-8d41-114577a84e58\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.158480 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-client-ca\") pod \"7df3f5fc-d372-490e-8d41-114577a84e58\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.160073 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-client-ca" (OuterVolumeSpecName: "client-ca") pod "7df3f5fc-d372-490e-8d41-114577a84e58" (UID: "7df3f5fc-d372-490e-8d41-114577a84e58"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.160493 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-config" (OuterVolumeSpecName: "config") pod "7df3f5fc-d372-490e-8d41-114577a84e58" (UID: "7df3f5fc-d372-490e-8d41-114577a84e58"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.259761 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9j6q\" (UniqueName: \"kubernetes.io/projected/7df3f5fc-d372-490e-8d41-114577a84e58-kube-api-access-c9j6q\") pod \"7df3f5fc-d372-490e-8d41-114577a84e58\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.260371 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df3f5fc-d372-490e-8d41-114577a84e58-serving-cert\") pod \"7df3f5fc-d372-490e-8d41-114577a84e58\" (UID: \"7df3f5fc-d372-490e-8d41-114577a84e58\") " Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.260672 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.260717 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7df3f5fc-d372-490e-8d41-114577a84e58-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.266103 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df3f5fc-d372-490e-8d41-114577a84e58-kube-api-access-c9j6q" (OuterVolumeSpecName: "kube-api-access-c9j6q") pod "7df3f5fc-d372-490e-8d41-114577a84e58" (UID: "7df3f5fc-d372-490e-8d41-114577a84e58"). InnerVolumeSpecName "kube-api-access-c9j6q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.266516 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df3f5fc-d372-490e-8d41-114577a84e58-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7df3f5fc-d372-490e-8d41-114577a84e58" (UID: "7df3f5fc-d372-490e-8d41-114577a84e58"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.266980 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.361074 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-config\") pod \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.361140 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-serving-cert\") pod \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.361169 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-proxy-ca-bundles\") pod \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.361386 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7df3f5fc-d372-490e-8d41-114577a84e58-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.361405 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9j6q\" (UniqueName: \"kubernetes.io/projected/7df3f5fc-d372-490e-8d41-114577a84e58-kube-api-access-c9j6q\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.362412 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "53cebe09-cd04-4c8b-a567-aaa71a0f0dba" (UID: "53cebe09-cd04-4c8b-a567-aaa71a0f0dba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.362601 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-config" (OuterVolumeSpecName: "config") pod "53cebe09-cd04-4c8b-a567-aaa71a0f0dba" (UID: "53cebe09-cd04-4c8b-a567-aaa71a0f0dba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.365898 4731 generic.go:334] "Generic (PLEG): container finished" podID="7df3f5fc-d372-490e-8d41-114577a84e58" containerID="b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4" exitCode=0 Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.366192 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" event={"ID":"7df3f5fc-d372-490e-8d41-114577a84e58","Type":"ContainerDied","Data":"b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4"} Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.366290 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" event={"ID":"7df3f5fc-d372-490e-8d41-114577a84e58","Type":"ContainerDied","Data":"6a5cba685964947a2a2a28cef40d4f8d5739150950947158767471cfd8159cde"} Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.366369 4731 scope.go:117] "RemoveContainer" containerID="b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.366555 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.366675 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "53cebe09-cd04-4c8b-a567-aaa71a0f0dba" (UID: "53cebe09-cd04-4c8b-a567-aaa71a0f0dba"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.369551 4731 generic.go:334] "Generic (PLEG): container finished" podID="53cebe09-cd04-4c8b-a567-aaa71a0f0dba" containerID="6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35" exitCode=0 Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.369601 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.369622 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" event={"ID":"53cebe09-cd04-4c8b-a567-aaa71a0f0dba","Type":"ContainerDied","Data":"6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35"} Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.369665 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-67c9f74866-xzn5h" event={"ID":"53cebe09-cd04-4c8b-a567-aaa71a0f0dba","Type":"ContainerDied","Data":"d3825783b01b22465090ab83fb403c1da3550cc10ebf485c3022f58b22bcfbc4"} Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.397134 4731 scope.go:117] "RemoveContainer" containerID="b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4" Nov 25 14:59:26 crc kubenswrapper[4731]: E1125 14:59:26.398019 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4\": container with ID starting with b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4 not found: ID does not exist" containerID="b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.398363 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4"} err="failed to get container status \"b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4\": rpc error: code = NotFound desc = could not find container \"b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4\": container with ID starting with b88366f954cacb66ebdb929ca6566b10d4b37482bda4a282aa86aedca1f655d4 not found: ID does not exist" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.398504 4731 scope.go:117] "RemoveContainer" containerID="6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.408323 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b"] Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.414602 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c7f6d8788-zhv2b"] Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.420135 4731 scope.go:117] "RemoveContainer" containerID="6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35" Nov 25 14:59:26 crc kubenswrapper[4731]: E1125 14:59:26.420918 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35\": container with ID starting with 6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35 not found: ID does not exist" containerID="6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.420978 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35"} err="failed to get container status 
\"6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35\": rpc error: code = NotFound desc = could not find container \"6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35\": container with ID starting with 6f87178a2863c4fe153cc6474bdbcb6e1d46c2dbcc0b9c75a439e5cfaf6abf35 not found: ID does not exist" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.462453 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-client-ca\") pod \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.462549 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wk67f\" (UniqueName: \"kubernetes.io/projected/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-kube-api-access-wk67f\") pod \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\" (UID: \"53cebe09-cd04-4c8b-a567-aaa71a0f0dba\") " Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.462790 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-config\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.462832 4731 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.462847 4731 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.463211 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-client-ca" (OuterVolumeSpecName: "client-ca") pod "53cebe09-cd04-4c8b-a567-aaa71a0f0dba" (UID: "53cebe09-cd04-4c8b-a567-aaa71a0f0dba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.467024 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-kube-api-access-wk67f" (OuterVolumeSpecName: "kube-api-access-wk67f") pod "53cebe09-cd04-4c8b-a567-aaa71a0f0dba" (UID: "53cebe09-cd04-4c8b-a567-aaa71a0f0dba"). InnerVolumeSpecName "kube-api-access-wk67f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.563534 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wk67f\" (UniqueName: \"kubernetes.io/projected/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-kube-api-access-wk67f\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.563581 4731 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/53cebe09-cd04-4c8b-a567-aaa71a0f0dba-client-ca\") on node \"crc\" DevicePath \"\"" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.706769 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-67c9f74866-xzn5h"] Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.713488 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-67c9f74866-xzn5h"] Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.767235 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c84f94c7d-mkk7b"] Nov 25 14:59:26 crc kubenswrapper[4731]: E1125 14:59:26.767642 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7df3f5fc-d372-490e-8d41-114577a84e58" containerName="route-controller-manager" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.767664 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7df3f5fc-d372-490e-8d41-114577a84e58" containerName="route-controller-manager" Nov 25 14:59:26 crc kubenswrapper[4731]: E1125 14:59:26.767677 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53cebe09-cd04-4c8b-a567-aaa71a0f0dba" containerName="controller-manager" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.767688 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="53cebe09-cd04-4c8b-a567-aaa71a0f0dba" containerName="controller-manager" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.767853 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="53cebe09-cd04-4c8b-a567-aaa71a0f0dba" containerName="controller-manager" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.767880 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7df3f5fc-d372-490e-8d41-114577a84e58" containerName="route-controller-manager" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.768548 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.771393 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.771803 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.771862 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.772338 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.772966 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.773289 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.778691 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.781730 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7"] Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.782940 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.786412 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.786460 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.786688 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.786925 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.787068 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.787216 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.793343 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c84f94c7d-mkk7b"] Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.804149 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7"] Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.817215 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53cebe09-cd04-4c8b-a567-aaa71a0f0dba" 
path="/var/lib/kubelet/pods/53cebe09-cd04-4c8b-a567-aaa71a0f0dba/volumes" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.818633 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df3f5fc-d372-490e-8d41-114577a84e58" path="/var/lib/kubelet/pods/7df3f5fc-d372-490e-8d41-114577a84e58/volumes" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.968267 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-serving-cert\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.968358 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dec03050-f7c1-414d-8ce1-942b88147708-serving-cert\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.968636 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-config\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.968713 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh94m\" (UniqueName: \"kubernetes.io/projected/dec03050-f7c1-414d-8ce1-942b88147708-kube-api-access-lh94m\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.968782 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-client-ca\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.968857 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec03050-f7c1-414d-8ce1-942b88147708-config\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.968903 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbf2b\" (UniqueName: \"kubernetes.io/projected/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-kube-api-access-pbf2b\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.968939 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dec03050-f7c1-414d-8ce1-942b88147708-client-ca\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:26 crc kubenswrapper[4731]: I1125 14:59:26.968969 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-proxy-ca-bundles\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.070083 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-config\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.070133 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh94m\" (UniqueName: \"kubernetes.io/projected/dec03050-f7c1-414d-8ce1-942b88147708-kube-api-access-lh94m\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.070206 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-client-ca\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.070235 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec03050-f7c1-414d-8ce1-942b88147708-config\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.070259 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbf2b\" (UniqueName: \"kubernetes.io/projected/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-kube-api-access-pbf2b\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.070274 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dec03050-f7c1-414d-8ce1-942b88147708-client-ca\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.070304 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-proxy-ca-bundles\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.070332 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-serving-cert\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.070354 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dec03050-f7c1-414d-8ce1-942b88147708-serving-cert\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.071865 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-config\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.072258 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dec03050-f7c1-414d-8ce1-942b88147708-client-ca\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.072308 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-proxy-ca-bundles\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.072905 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dec03050-f7c1-414d-8ce1-942b88147708-config\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.074243 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-client-ca\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.074471 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dec03050-f7c1-414d-8ce1-942b88147708-serving-cert\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " 
pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.076442 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-serving-cert\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.097447 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh94m\" (UniqueName: \"kubernetes.io/projected/dec03050-f7c1-414d-8ce1-942b88147708-kube-api-access-lh94m\") pod \"route-controller-manager-567f5f9544-vvvz7\" (UID: \"dec03050-f7c1-414d-8ce1-942b88147708\") " pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.102599 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbf2b\" (UniqueName: \"kubernetes.io/projected/bbe0c4a1-79bc-4bd5-88ec-71667c18e122-kube-api-access-pbf2b\") pod \"controller-manager-c84f94c7d-mkk7b\" (UID: \"bbe0c4a1-79bc-4bd5-88ec-71667c18e122\") " pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.103933 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.388350 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.514713 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7"] Nov 25 14:59:27 crc kubenswrapper[4731]: W1125 14:59:27.528471 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddec03050_f7c1_414d_8ce1_942b88147708.slice/crio-f189cb3c7ad317075be2aa947d581634a288f0097b2af58d1b74fd2781caed1f WatchSource:0}: Error finding container f189cb3c7ad317075be2aa947d581634a288f0097b2af58d1b74fd2781caed1f: Status 404 returned error can't find the container with id f189cb3c7ad317075be2aa947d581634a288f0097b2af58d1b74fd2781caed1f Nov 25 14:59:27 crc kubenswrapper[4731]: I1125 14:59:27.814765 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c84f94c7d-mkk7b"] Nov 25 14:59:27 crc kubenswrapper[4731]: W1125 14:59:27.822349 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbbe0c4a1_79bc_4bd5_88ec_71667c18e122.slice/crio-42748fe4f438c23ba096e7570db1cc9dbd32708ba278a4bfb2a3bcbb1ef89249 WatchSource:0}: Error finding container 42748fe4f438c23ba096e7570db1cc9dbd32708ba278a4bfb2a3bcbb1ef89249: Status 404 returned error can't find the container with id 42748fe4f438c23ba096e7570db1cc9dbd32708ba278a4bfb2a3bcbb1ef89249 Nov 25 14:59:28 crc kubenswrapper[4731]: I1125 14:59:28.385843 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" 
event={"ID":"dec03050-f7c1-414d-8ce1-942b88147708","Type":"ContainerStarted","Data":"9552f6487b75bc4069ae99cd1b686b16de8693129ed559292be03e998df77ee5"} Nov 25 14:59:28 crc kubenswrapper[4731]: I1125 14:59:28.386112 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" event={"ID":"dec03050-f7c1-414d-8ce1-942b88147708","Type":"ContainerStarted","Data":"f189cb3c7ad317075be2aa947d581634a288f0097b2af58d1b74fd2781caed1f"} Nov 25 14:59:28 crc kubenswrapper[4731]: I1125 14:59:28.387724 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:28 crc kubenswrapper[4731]: I1125 14:59:28.390646 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" event={"ID":"bbe0c4a1-79bc-4bd5-88ec-71667c18e122","Type":"ContainerStarted","Data":"daefc2d8f2d71fb29b3785cf55914778ef50b9be7ee8df56f9e0a1c5de436948"} Nov 25 14:59:28 crc kubenswrapper[4731]: I1125 14:59:28.390922 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" event={"ID":"bbe0c4a1-79bc-4bd5-88ec-71667c18e122","Type":"ContainerStarted","Data":"42748fe4f438c23ba096e7570db1cc9dbd32708ba278a4bfb2a3bcbb1ef89249"} Nov 25 14:59:28 crc kubenswrapper[4731]: I1125 14:59:28.391105 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:28 crc kubenswrapper[4731]: I1125 14:59:28.396143 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" Nov 25 14:59:28 crc kubenswrapper[4731]: I1125 14:59:28.401361 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" Nov 25 14:59:28 crc kubenswrapper[4731]: I1125 14:59:28.409864 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-567f5f9544-vvvz7" podStartSLOduration=3.409843724 podStartE2EDuration="3.409843724s" podCreationTimestamp="2025-11-25 14:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:59:28.408596741 +0000 UTC m=+327.155731181" watchObservedRunningTime="2025-11-25 14:59:28.409843724 +0000 UTC m=+327.156978144" Nov 25 14:59:28 crc kubenswrapper[4731]: I1125 14:59:28.437220 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-c84f94c7d-mkk7b" podStartSLOduration=3.43718641 podStartE2EDuration="3.43718641s" podCreationTimestamp="2025-11-25 14:59:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:59:28.436090341 +0000 UTC m=+327.183224791" watchObservedRunningTime="2025-11-25 14:59:28.43718641 +0000 UTC m=+327.184320850" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.538381 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gq8m6"] Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.540005 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.542592 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.547414 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gq8m6"] Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.558554 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z5c6\" (UniqueName: \"kubernetes.io/projected/3caecded-c24c-4f5f-8083-7dd157b4a52d-kube-api-access-8z5c6\") pod \"redhat-operators-gq8m6\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.558742 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-catalog-content\") pod \"redhat-operators-gq8m6\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.558797 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-utilities\") pod \"redhat-operators-gq8m6\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.659975 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z5c6\" (UniqueName: \"kubernetes.io/projected/3caecded-c24c-4f5f-8083-7dd157b4a52d-kube-api-access-8z5c6\") pod \"redhat-operators-gq8m6\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.660123 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-catalog-content\") pod \"redhat-operators-gq8m6\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.660159 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-utilities\") pod \"redhat-operators-gq8m6\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.660697 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-utilities\") pod \"redhat-operators-gq8m6\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.660997 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-catalog-content\") pod \"redhat-operators-gq8m6\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " 
pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.685794 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z5c6\" (UniqueName: \"kubernetes.io/projected/3caecded-c24c-4f5f-8083-7dd157b4a52d-kube-api-access-8z5c6\") pod \"redhat-operators-gq8m6\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.735570 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5nlwp"] Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.736836 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.739115 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.750013 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nlwp"] Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.862020 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde46057-80c9-49a3-9bab-18d8fea09dc9-utilities\") pod \"redhat-marketplace-5nlwp\" (UID: \"bde46057-80c9-49a3-9bab-18d8fea09dc9\") " pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.863201 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpgfn\" (UniqueName: \"kubernetes.io/projected/bde46057-80c9-49a3-9bab-18d8fea09dc9-kube-api-access-fpgfn\") pod \"redhat-marketplace-5nlwp\" (UID: \"bde46057-80c9-49a3-9bab-18d8fea09dc9\") " pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.863462 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde46057-80c9-49a3-9bab-18d8fea09dc9-catalog-content\") pod \"redhat-marketplace-5nlwp\" (UID: \"bde46057-80c9-49a3-9bab-18d8fea09dc9\") " pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.865625 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.964976 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde46057-80c9-49a3-9bab-18d8fea09dc9-catalog-content\") pod \"redhat-marketplace-5nlwp\" (UID: \"bde46057-80c9-49a3-9bab-18d8fea09dc9\") " pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.965074 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde46057-80c9-49a3-9bab-18d8fea09dc9-utilities\") pod \"redhat-marketplace-5nlwp\" (UID: \"bde46057-80c9-49a3-9bab-18d8fea09dc9\") " pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.965125 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpgfn\" (UniqueName: \"kubernetes.io/projected/bde46057-80c9-49a3-9bab-18d8fea09dc9-kube-api-access-fpgfn\") pod \"redhat-marketplace-5nlwp\" (UID: \"bde46057-80c9-49a3-9bab-18d8fea09dc9\") " pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.965676 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bde46057-80c9-49a3-9bab-18d8fea09dc9-catalog-content\") pod \"redhat-marketplace-5nlwp\" (UID: \"bde46057-80c9-49a3-9bab-18d8fea09dc9\") " pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.966145 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bde46057-80c9-49a3-9bab-18d8fea09dc9-utilities\") pod \"redhat-marketplace-5nlwp\" (UID: \"bde46057-80c9-49a3-9bab-18d8fea09dc9\") " pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:33 crc kubenswrapper[4731]: I1125 14:59:33.995644 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpgfn\" (UniqueName: \"kubernetes.io/projected/bde46057-80c9-49a3-9bab-18d8fea09dc9-kube-api-access-fpgfn\") pod \"redhat-marketplace-5nlwp\" (UID: \"bde46057-80c9-49a3-9bab-18d8fea09dc9\") " pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.067373 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5nlwp" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.313406 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gq8m6"] Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.425529 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq8m6" event={"ID":"3caecded-c24c-4f5f-8083-7dd157b4a52d","Type":"ContainerStarted","Data":"bb2ae99bdeb629e7b57071e392407f5d8e3162ab7ce292b300ecb4cf26f8b486"} Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.529078 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5w57r"] Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.530026 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.545626 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5nlwp"] Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.564286 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5w57r"] Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.675788 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e245bbba-4dd6-431e-afbd-e5048bcc2e83-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.675968 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.676006 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e245bbba-4dd6-431e-afbd-e5048bcc2e83-registry-tls\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.676034 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e245bbba-4dd6-431e-afbd-e5048bcc2e83-registry-certificates\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.676085 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e245bbba-4dd6-431e-afbd-e5048bcc2e83-trusted-ca\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.676121 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b2m8\" (UniqueName: \"kubernetes.io/projected/e245bbba-4dd6-431e-afbd-e5048bcc2e83-kube-api-access-2b2m8\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.676189 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e245bbba-4dd6-431e-afbd-e5048bcc2e83-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: 
I1125 14:59:34.676249 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e245bbba-4dd6-431e-afbd-e5048bcc2e83-bound-sa-token\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.705121 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.777706 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b2m8\" (UniqueName: \"kubernetes.io/projected/e245bbba-4dd6-431e-afbd-e5048bcc2e83-kube-api-access-2b2m8\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.777801 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e245bbba-4dd6-431e-afbd-e5048bcc2e83-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.777891 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e245bbba-4dd6-431e-afbd-e5048bcc2e83-bound-sa-token\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.778552 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e245bbba-4dd6-431e-afbd-e5048bcc2e83-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.778640 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e245bbba-4dd6-431e-afbd-e5048bcc2e83-registry-tls\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.778705 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e245bbba-4dd6-431e-afbd-e5048bcc2e83-registry-certificates\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.778758 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/e245bbba-4dd6-431e-afbd-e5048bcc2e83-trusted-ca\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.778946 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e245bbba-4dd6-431e-afbd-e5048bcc2e83-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.779901 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e245bbba-4dd6-431e-afbd-e5048bcc2e83-trusted-ca\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.780864 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e245bbba-4dd6-431e-afbd-e5048bcc2e83-registry-certificates\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.786981 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e245bbba-4dd6-431e-afbd-e5048bcc2e83-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.786997 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e245bbba-4dd6-431e-afbd-e5048bcc2e83-registry-tls\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.795710 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e245bbba-4dd6-431e-afbd-e5048bcc2e83-bound-sa-token\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.799027 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b2m8\" (UniqueName: \"kubernetes.io/projected/e245bbba-4dd6-431e-afbd-e5048bcc2e83-kube-api-access-2b2m8\") pod \"image-registry-66df7c8f76-5w57r\" (UID: \"e245bbba-4dd6-431e-afbd-e5048bcc2e83\") " pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:34 crc kubenswrapper[4731]: I1125 14:59:34.850897 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.265786 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5w57r"] Nov 25 14:59:35 crc kubenswrapper[4731]: W1125 14:59:35.272541 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode245bbba_4dd6_431e_afbd_e5048bcc2e83.slice/crio-1c77ac7fb6eeb9982eccf076d2b3ed544269cc5904f50f2187c03446209fe3d0 WatchSource:0}: Error finding container 1c77ac7fb6eeb9982eccf076d2b3ed544269cc5904f50f2187c03446209fe3d0: Status 404 returned error can't find the container with id 1c77ac7fb6eeb9982eccf076d2b3ed544269cc5904f50f2187c03446209fe3d0 Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.434885 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" event={"ID":"e245bbba-4dd6-431e-afbd-e5048bcc2e83","Type":"ContainerStarted","Data":"1c77ac7fb6eeb9982eccf076d2b3ed544269cc5904f50f2187c03446209fe3d0"} Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.438024 4731 generic.go:334] "Generic (PLEG): container finished" podID="bde46057-80c9-49a3-9bab-18d8fea09dc9" containerID="6d36721d127d4e5e5014f90a188ee4674b659893a0427878e00afb5f706fadcf" exitCode=0 Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.438117 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nlwp" event={"ID":"bde46057-80c9-49a3-9bab-18d8fea09dc9","Type":"ContainerDied","Data":"6d36721d127d4e5e5014f90a188ee4674b659893a0427878e00afb5f706fadcf"} Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.438167 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nlwp" event={"ID":"bde46057-80c9-49a3-9bab-18d8fea09dc9","Type":"ContainerStarted","Data":"a9d0234200617f2678d620d25b3ed3f93d822e57562304ab17995e0643481813"} Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.442066 4731 generic.go:334] "Generic (PLEG): container finished" podID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerID="36d373a0aaa1a06213d3a37bb3b6f6379b3ed76f8630e3a2621e541982418dbd" exitCode=0 Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.442149 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq8m6" event={"ID":"3caecded-c24c-4f5f-8083-7dd157b4a52d","Type":"ContainerDied","Data":"36d373a0aaa1a06213d3a37bb3b6f6379b3ed76f8630e3a2621e541982418dbd"} Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.937191 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-f4fgc"] Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.938649 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.941022 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Nov 25 14:59:35 crc kubenswrapper[4731]: I1125 14:59:35.950916 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4fgc"] Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.097723 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-catalog-content\") pod \"community-operators-f4fgc\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.097782 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsjm8\" (UniqueName: \"kubernetes.io/projected/3a5c583d-9a46-4ee6-81c0-21b971aa434c-kube-api-access-bsjm8\") pod \"community-operators-f4fgc\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.097912 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-utilities\") pod \"community-operators-f4fgc\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.135439 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b5pqt"] Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.136861 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.141515 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.157934 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5pqt"] Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.199910 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-utilities\") pod \"community-operators-f4fgc\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.199996 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-catalog-content\") pod \"community-operators-f4fgc\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.200035 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsjm8\" (UniqueName: \"kubernetes.io/projected/3a5c583d-9a46-4ee6-81c0-21b971aa434c-kube-api-access-bsjm8\") pod \"community-operators-f4fgc\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.200619 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-utilities\") pod \"community-operators-f4fgc\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.200858 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-catalog-content\") pod \"community-operators-f4fgc\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.221524 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsjm8\" (UniqueName: \"kubernetes.io/projected/3a5c583d-9a46-4ee6-81c0-21b971aa434c-kube-api-access-bsjm8\") pod \"community-operators-f4fgc\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.272944 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.301208 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87a958bc-c1ba-4b0b-903d-b21a84408594-catalog-content\") pod \"certified-operators-b5pqt\" (UID: \"87a958bc-c1ba-4b0b-903d-b21a84408594\") " pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.301281 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87a958bc-c1ba-4b0b-903d-b21a84408594-utilities\") pod \"certified-operators-b5pqt\" (UID: \"87a958bc-c1ba-4b0b-903d-b21a84408594\") " pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.301349 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2bfj\" (UniqueName: \"kubernetes.io/projected/87a958bc-c1ba-4b0b-903d-b21a84408594-kube-api-access-p2bfj\") pod \"certified-operators-b5pqt\" (UID: \"87a958bc-c1ba-4b0b-903d-b21a84408594\") " pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.402970 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87a958bc-c1ba-4b0b-903d-b21a84408594-utilities\") pod \"certified-operators-b5pqt\" (UID: \"87a958bc-c1ba-4b0b-903d-b21a84408594\") " pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.403050 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2bfj\" (UniqueName: \"kubernetes.io/projected/87a958bc-c1ba-4b0b-903d-b21a84408594-kube-api-access-p2bfj\") pod \"certified-operators-b5pqt\" (UID: \"87a958bc-c1ba-4b0b-903d-b21a84408594\") " pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.403118 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87a958bc-c1ba-4b0b-903d-b21a84408594-catalog-content\") pod \"certified-operators-b5pqt\" (UID: \"87a958bc-c1ba-4b0b-903d-b21a84408594\") " pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.403618 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87a958bc-c1ba-4b0b-903d-b21a84408594-utilities\") pod \"certified-operators-b5pqt\" (UID: \"87a958bc-c1ba-4b0b-903d-b21a84408594\") " pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.403909 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87a958bc-c1ba-4b0b-903d-b21a84408594-catalog-content\") pod \"certified-operators-b5pqt\" (UID: \"87a958bc-c1ba-4b0b-903d-b21a84408594\") " pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.431159 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2bfj\" (UniqueName: \"kubernetes.io/projected/87a958bc-c1ba-4b0b-903d-b21a84408594-kube-api-access-p2bfj\") pod 
\"certified-operators-b5pqt\" (UID: \"87a958bc-c1ba-4b0b-903d-b21a84408594\") " pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.466951 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" event={"ID":"e245bbba-4dd6-431e-afbd-e5048bcc2e83","Type":"ContainerStarted","Data":"5c6fd5f0be7810ae0a33da8e034cf4298db59b3dd45d006fc1dbc299489a0eb8"} Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.467535 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.469838 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nlwp" event={"ID":"bde46057-80c9-49a3-9bab-18d8fea09dc9","Type":"ContainerStarted","Data":"62fa7805a2a7640d3d25228ba0502d937c5678c338c231e8ad6458b636dd5259"} Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.472378 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq8m6" event={"ID":"3caecded-c24c-4f5f-8083-7dd157b4a52d","Type":"ContainerStarted","Data":"51df46204887a815c75a4aa7b8999e07cca0471655a8af046f7bdf2c1d8e3bd7"} Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.514124 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" podStartSLOduration=2.514090993 podStartE2EDuration="2.514090993s" podCreationTimestamp="2025-11-25 14:59:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 14:59:36.510885047 +0000 UTC m=+335.258019477" watchObservedRunningTime="2025-11-25 14:59:36.514090993 +0000 UTC m=+335.261225423" Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.540508 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-f4fgc"] Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.550133 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b5pqt"
Nov 25 14:59:36 crc kubenswrapper[4731]: I1125 14:59:36.963073 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5pqt"]
Nov 25 14:59:37 crc kubenswrapper[4731]: I1125 14:59:37.480420 4731 generic.go:334] "Generic (PLEG): container finished" podID="87a958bc-c1ba-4b0b-903d-b21a84408594" containerID="13271a82ee1cae631d705a08db2cacff0d77d1fed307f120f3e14193fbaae5ed" exitCode=0
Nov 25 14:59:37 crc kubenswrapper[4731]: I1125 14:59:37.480505 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5pqt" event={"ID":"87a958bc-c1ba-4b0b-903d-b21a84408594","Type":"ContainerDied","Data":"13271a82ee1cae631d705a08db2cacff0d77d1fed307f120f3e14193fbaae5ed"}
Nov 25 14:59:37 crc kubenswrapper[4731]: I1125 14:59:37.480549 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5pqt" event={"ID":"87a958bc-c1ba-4b0b-903d-b21a84408594","Type":"ContainerStarted","Data":"3feabd241c2e89e100a50332b1cec0a30c63a6f303b1af7215ff15f5aee38cfd"}
Nov 25 14:59:37 crc kubenswrapper[4731]: I1125 14:59:37.485498 4731 generic.go:334] "Generic (PLEG): container finished" podID="bde46057-80c9-49a3-9bab-18d8fea09dc9" containerID="62fa7805a2a7640d3d25228ba0502d937c5678c338c231e8ad6458b636dd5259" exitCode=0
Nov 25 14:59:37 crc kubenswrapper[4731]: I1125 14:59:37.485612 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nlwp" event={"ID":"bde46057-80c9-49a3-9bab-18d8fea09dc9","Type":"ContainerDied","Data":"62fa7805a2a7640d3d25228ba0502d937c5678c338c231e8ad6458b636dd5259"}
Nov 25 14:59:37 crc kubenswrapper[4731]: I1125 14:59:37.488877 4731 generic.go:334] "Generic (PLEG): container finished" podID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerID="51df46204887a815c75a4aa7b8999e07cca0471655a8af046f7bdf2c1d8e3bd7" exitCode=0
Nov 25 14:59:37 crc kubenswrapper[4731]: I1125 14:59:37.489135 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq8m6" event={"ID":"3caecded-c24c-4f5f-8083-7dd157b4a52d","Type":"ContainerDied","Data":"51df46204887a815c75a4aa7b8999e07cca0471655a8af046f7bdf2c1d8e3bd7"}
Nov 25 14:59:37 crc kubenswrapper[4731]: I1125 14:59:37.496674 4731 generic.go:334] "Generic (PLEG): container finished" podID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerID="f3a1e5fc87510c67c6a8592cad04a422cd73fedd7ff535768721cb34982af7e4" exitCode=0
Nov 25 14:59:37 crc kubenswrapper[4731]: I1125 14:59:37.496717 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4fgc" event={"ID":"3a5c583d-9a46-4ee6-81c0-21b971aa434c","Type":"ContainerDied","Data":"f3a1e5fc87510c67c6a8592cad04a422cd73fedd7ff535768721cb34982af7e4"}
Nov 25 14:59:37 crc kubenswrapper[4731]: I1125 14:59:37.496794 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4fgc" event={"ID":"3a5c583d-9a46-4ee6-81c0-21b971aa434c","Type":"ContainerStarted","Data":"4f3157c87ae4a42388bdb810fa83b35696c5021f757bf545c27f16c739b7288a"}
Nov 25 14:59:38 crc kubenswrapper[4731]: I1125 14:59:38.513835 4731 generic.go:334] "Generic (PLEG): container finished" podID="87a958bc-c1ba-4b0b-903d-b21a84408594" containerID="f732114f4081906e6739b8a464a21ecc87a267f5b798bbc36bcc434edbae3587" exitCode=0
Nov 25 14:59:38 crc kubenswrapper[4731]: I1125 14:59:38.513876 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5pqt" event={"ID":"87a958bc-c1ba-4b0b-903d-b21a84408594","Type":"ContainerDied","Data":"f732114f4081906e6739b8a464a21ecc87a267f5b798bbc36bcc434edbae3587"}
Nov 25 14:59:38 crc kubenswrapper[4731]: I1125 14:59:38.518562 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5nlwp" event={"ID":"bde46057-80c9-49a3-9bab-18d8fea09dc9","Type":"ContainerStarted","Data":"f7fd2baf00ba903345006d832c4d60c15a4b30fde35e11a70d8ffb9e7b38369c"}
Nov 25 14:59:38 crc kubenswrapper[4731]: I1125 14:59:38.524916 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq8m6" event={"ID":"3caecded-c24c-4f5f-8083-7dd157b4a52d","Type":"ContainerStarted","Data":"3657e5d0cf5363254eac7bc1fc9c141ea4aee505dfca71f128a1a159d3e0eefa"}
Nov 25 14:59:38 crc kubenswrapper[4731]: I1125 14:59:38.527293 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4fgc" event={"ID":"3a5c583d-9a46-4ee6-81c0-21b971aa434c","Type":"ContainerStarted","Data":"5910fa899500bf45b0c48ba044dd2a163df87cab6225fbda735b4c522c23fddd"}
Nov 25 14:59:38 crc kubenswrapper[4731]: I1125 14:59:38.590847 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5nlwp" podStartSLOduration=3.10894168 podStartE2EDuration="5.59080697s" podCreationTimestamp="2025-11-25 14:59:33 +0000 UTC" firstStartedPulling="2025-11-25 14:59:35.441790626 +0000 UTC m=+334.188925056" lastFinishedPulling="2025-11-25 14:59:37.923655916 +0000 UTC m=+336.670790346" observedRunningTime="2025-11-25 14:59:38.585867198 +0000 UTC m=+337.333001638" watchObservedRunningTime="2025-11-25 14:59:38.59080697 +0000 UTC m=+337.337941400"
Nov 25 14:59:38 crc kubenswrapper[4731]: I1125 14:59:38.628729 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gq8m6" podStartSLOduration=2.8269850229999998 podStartE2EDuration="5.628709861s" podCreationTimestamp="2025-11-25 14:59:33 +0000 UTC" firstStartedPulling="2025-11-25 14:59:35.449118173 +0000 UTC m=+334.196252603" lastFinishedPulling="2025-11-25 14:59:38.250843011 +0000 UTC m=+336.997977441" observedRunningTime="2025-11-25 14:59:38.627520509 +0000 UTC m=+337.374654929" watchObservedRunningTime="2025-11-25 14:59:38.628709861 +0000 UTC m=+337.375844291"
Nov 25 14:59:39 crc kubenswrapper[4731]: I1125 14:59:39.536238 4731 generic.go:334] "Generic (PLEG): container finished" podID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerID="5910fa899500bf45b0c48ba044dd2a163df87cab6225fbda735b4c522c23fddd" exitCode=0
Nov 25 14:59:39 crc kubenswrapper[4731]: I1125 14:59:39.536292 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4fgc" event={"ID":"3a5c583d-9a46-4ee6-81c0-21b971aa434c","Type":"ContainerDied","Data":"5910fa899500bf45b0c48ba044dd2a163df87cab6225fbda735b4c522c23fddd"}
Nov 25 14:59:39 crc kubenswrapper[4731]: I1125 14:59:39.542483 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5pqt" event={"ID":"87a958bc-c1ba-4b0b-903d-b21a84408594","Type":"ContainerStarted","Data":"bbb26904cf3beaaa173235c5fb7f4f84a852c217e2542f800895b7e647676646"}
Nov 25 14:59:39 crc kubenswrapper[4731]: I1125 14:59:39.579244 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b5pqt" podStartSLOduration=2.151293982 podStartE2EDuration="3.57921526s" podCreationTimestamp="2025-11-25 14:59:36 +0000 UTC" firstStartedPulling="2025-11-25 14:59:37.484495468 +0000 UTC m=+336.231629898" lastFinishedPulling="2025-11-25 14:59:38.912416746 +0000 UTC m=+337.659551176" observedRunningTime="2025-11-25 14:59:39.576412525 +0000 UTC m=+338.323546955" watchObservedRunningTime="2025-11-25 14:59:39.57921526 +0000 UTC m=+338.326349690"
Nov 25 14:59:40 crc kubenswrapper[4731]: I1125 14:59:40.553732 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4fgc" event={"ID":"3a5c583d-9a46-4ee6-81c0-21b971aa434c","Type":"ContainerStarted","Data":"d1a16f4e654c4b872d5f632174a6cd7d92c72b88e8b98220a970072618b83a5b"}
Nov 25 14:59:40 crc kubenswrapper[4731]: I1125 14:59:40.575832 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-f4fgc" podStartSLOduration=2.904842659 podStartE2EDuration="5.575795779s" podCreationTimestamp="2025-11-25 14:59:35 +0000 UTC" firstStartedPulling="2025-11-25 14:59:37.498555576 +0000 UTC m=+336.245690006" lastFinishedPulling="2025-11-25 14:59:40.169508696 +0000 UTC m=+338.916643126" observedRunningTime="2025-11-25 14:59:40.574265518 +0000 UTC m=+339.321399948" watchObservedRunningTime="2025-11-25 14:59:40.575795779 +0000 UTC m=+339.322930209"
Nov 25 14:59:43 crc kubenswrapper[4731]: I1125 14:59:43.866092 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gq8m6"
Nov 25 14:59:43 crc kubenswrapper[4731]: I1125 14:59:43.866214 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gq8m6"
Nov 25 14:59:43 crc kubenswrapper[4731]: I1125 14:59:43.938335 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gq8m6"
Nov 25 14:59:44 crc kubenswrapper[4731]: I1125 14:59:44.068605 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5nlwp"
Nov 25 14:59:44 crc kubenswrapper[4731]: I1125 14:59:44.068714 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5nlwp"
Nov 25 14:59:44 crc kubenswrapper[4731]: I1125 14:59:44.117596 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5nlwp"
Nov 25 14:59:44 crc kubenswrapper[4731]: I1125 14:59:44.654750 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5nlwp"
Nov 25 14:59:44 crc kubenswrapper[4731]: I1125 14:59:44.658367 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gq8m6"
Nov 25 14:59:46 crc kubenswrapper[4731]: I1125 14:59:46.274093 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-f4fgc"
Nov 25 14:59:46 crc kubenswrapper[4731]: I1125 14:59:46.274414 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-f4fgc"
Nov 25 14:59:46 crc kubenswrapper[4731]: I1125 14:59:46.315196 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-f4fgc"
Nov 25 14:59:46 crc kubenswrapper[4731]: I1125 14:59:46.550503 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b5pqt"
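The probe transitions above — startup flipping from "unhealthy" to "started", readiness from "" (no result yet) to "ready" — are the sequence the kubelet logs for a pod that defines both a startupProbe and a readinessProbe: the startup probe reports unhealthy until its first success, which then unblocks the readiness probe. A minimal sketch of such a pair using the k8s.io/api/core/v1 types; the TCP port and all timings here are illustrative assumptions, not values recovered from this log.

```go
// Sketch of a startup + readiness probe pair that would produce the
// "SyncLoop (probe)" transitions above. Port and timings are assumed.
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func catalogProbes() (startup, readiness *corev1.Probe) {
	startup = &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			// Hypothetical port; the log does not record the probe target.
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(50051)},
		},
		PeriodSeconds: 1,
		// Pod reports startup "unhealthy" until the first success flips it
		// to "started"; only then does the readiness probe begin running.
		FailureThreshold: 10,
	}
	readiness = &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(50051)},
		},
		// Readiness status stays "" until the first result arrives,
		// then becomes "ready" on success.
		PeriodSeconds: 5,
	}
	return startup, readiness
}
```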
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:46 crc kubenswrapper[4731]: I1125 14:59:46.550679 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:46 crc kubenswrapper[4731]: I1125 14:59:46.589470 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:46 crc kubenswrapper[4731]: I1125 14:59:46.650664 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b5pqt" Nov 25 14:59:46 crc kubenswrapper[4731]: I1125 14:59:46.651566 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-f4fgc" Nov 25 14:59:54 crc kubenswrapper[4731]: I1125 14:59:54.856068 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5w57r" Nov 25 14:59:54 crc kubenswrapper[4731]: I1125 14:59:54.940279 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbgwg"] Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.193249 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr"] Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.196434 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.199524 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.200099 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.203694 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr"] Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.340851 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9970e656-62f1-49c3-a6b6-085caa1b9860-secret-volume\") pod \"collect-profiles-29401380-x7gvr\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.340966 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9970e656-62f1-49c3-a6b6-085caa1b9860-config-volume\") pod \"collect-profiles-29401380-x7gvr\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.342298 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c4rc\" (UniqueName: \"kubernetes.io/projected/9970e656-62f1-49c3-a6b6-085caa1b9860-kube-api-access-7c4rc\") pod \"collect-profiles-29401380-x7gvr\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.443552 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9970e656-62f1-49c3-a6b6-085caa1b9860-secret-volume\") pod \"collect-profiles-29401380-x7gvr\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.443925 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9970e656-62f1-49c3-a6b6-085caa1b9860-config-volume\") pod \"collect-profiles-29401380-x7gvr\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.444060 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c4rc\" (UniqueName: \"kubernetes.io/projected/9970e656-62f1-49c3-a6b6-085caa1b9860-kube-api-access-7c4rc\") pod \"collect-profiles-29401380-x7gvr\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.447401 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9970e656-62f1-49c3-a6b6-085caa1b9860-config-volume\") pod \"collect-profiles-29401380-x7gvr\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.453531 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9970e656-62f1-49c3-a6b6-085caa1b9860-secret-volume\") pod \"collect-profiles-29401380-x7gvr\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.465096 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c4rc\" (UniqueName: \"kubernetes.io/projected/9970e656-62f1-49c3-a6b6-085caa1b9860-kube-api-access-7c4rc\") pod \"collect-profiles-29401380-x7gvr\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.522447 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:00 crc kubenswrapper[4731]: I1125 15:00:00.950073 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr"] Nov 25 15:00:00 crc kubenswrapper[4731]: W1125 15:00:00.953939 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9970e656_62f1_49c3_a6b6_085caa1b9860.slice/crio-505dabbace07b85d9647a13d90034bd6838f006c10e93b65dc6ba6ac743407f3 WatchSource:0}: Error finding container 505dabbace07b85d9647a13d90034bd6838f006c10e93b65dc6ba6ac743407f3: Status 404 returned error can't find the container with id 505dabbace07b85d9647a13d90034bd6838f006c10e93b65dc6ba6ac743407f3 Nov 25 15:00:01 crc kubenswrapper[4731]: I1125 15:00:01.731126 4731 generic.go:334] "Generic (PLEG): container finished" podID="9970e656-62f1-49c3-a6b6-085caa1b9860" containerID="391b7e4f3aa1fbfd06417c7457824396f9e385a4423dae4d72a8c1028f703b26" exitCode=0 Nov 25 15:00:01 crc kubenswrapper[4731]: I1125 15:00:01.731212 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" event={"ID":"9970e656-62f1-49c3-a6b6-085caa1b9860","Type":"ContainerDied","Data":"391b7e4f3aa1fbfd06417c7457824396f9e385a4423dae4d72a8c1028f703b26"} Nov 25 15:00:01 crc kubenswrapper[4731]: I1125 15:00:01.731254 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" event={"ID":"9970e656-62f1-49c3-a6b6-085caa1b9860","Type":"ContainerStarted","Data":"505dabbace07b85d9647a13d90034bd6838f006c10e93b65dc6ba6ac743407f3"} Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.061962 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.190399 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9970e656-62f1-49c3-a6b6-085caa1b9860-secret-volume\") pod \"9970e656-62f1-49c3-a6b6-085caa1b9860\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.190509 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4rc\" (UniqueName: \"kubernetes.io/projected/9970e656-62f1-49c3-a6b6-085caa1b9860-kube-api-access-7c4rc\") pod \"9970e656-62f1-49c3-a6b6-085caa1b9860\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.190537 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9970e656-62f1-49c3-a6b6-085caa1b9860-config-volume\") pod \"9970e656-62f1-49c3-a6b6-085caa1b9860\" (UID: \"9970e656-62f1-49c3-a6b6-085caa1b9860\") " Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.191117 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9970e656-62f1-49c3-a6b6-085caa1b9860-config-volume" (OuterVolumeSpecName: "config-volume") pod "9970e656-62f1-49c3-a6b6-085caa1b9860" (UID: "9970e656-62f1-49c3-a6b6-085caa1b9860"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.195567 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9970e656-62f1-49c3-a6b6-085caa1b9860-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9970e656-62f1-49c3-a6b6-085caa1b9860" (UID: "9970e656-62f1-49c3-a6b6-085caa1b9860"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.200018 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9970e656-62f1-49c3-a6b6-085caa1b9860-kube-api-access-7c4rc" (OuterVolumeSpecName: "kube-api-access-7c4rc") pod "9970e656-62f1-49c3-a6b6-085caa1b9860" (UID: "9970e656-62f1-49c3-a6b6-085caa1b9860"). InnerVolumeSpecName "kube-api-access-7c4rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.292388 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4rc\" (UniqueName: \"kubernetes.io/projected/9970e656-62f1-49c3-a6b6-085caa1b9860-kube-api-access-7c4rc\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.292435 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9970e656-62f1-49c3-a6b6-085caa1b9860-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.292451 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9970e656-62f1-49c3-a6b6-085caa1b9860-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.744249 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr" event={"ID":"9970e656-62f1-49c3-a6b6-085caa1b9860","Type":"ContainerDied","Data":"505dabbace07b85d9647a13d90034bd6838f006c10e93b65dc6ba6ac743407f3"} Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.744581 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="505dabbace07b85d9647a13d90034bd6838f006c10e93b65dc6ba6ac743407f3" Nov 25 15:00:03 crc kubenswrapper[4731]: I1125 15:00:03.744327 4731 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 15:00:19 crc kubenswrapper[4731]: I1125 15:00:19.986076 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" podUID="bb91ed12-623e-4b63-b24d-e47c36f7ef7b" containerName="registry" containerID="cri-o://29a73f562fabe54630074f615e71e9e9ddb06b57c51b2a23f7ac4a10211ae035" gracePeriod=30
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.040711 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.040803 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.360974 4731 generic.go:334] "Generic (PLEG): container finished" podID="bb91ed12-623e-4b63-b24d-e47c36f7ef7b" containerID="29a73f562fabe54630074f615e71e9e9ddb06b57c51b2a23f7ac4a10211ae035" exitCode=0
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.361297 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" event={"ID":"bb91ed12-623e-4b63-b24d-e47c36f7ef7b","Type":"ContainerDied","Data":"29a73f562fabe54630074f615e71e9e9ddb06b57c51b2a23f7ac4a10211ae035"}
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.361483 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg" event={"ID":"bb91ed12-623e-4b63-b24d-e47c36f7ef7b","Type":"ContainerDied","Data":"9caa92c07f89b3556b46e2a20fb54223e670677f66c8d867e859baf605671505"}
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.361509 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9caa92c07f89b3556b46e2a20fb54223e670677f66c8d867e859baf605671505"
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.392263 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.502091 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-ca-trust-extracted\") pod \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") "
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.502165 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-bound-sa-token\") pod \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") "
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.502205 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-trusted-ca\") pod \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") "
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.502256 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48vqj\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-kube-api-access-48vqj\") pod \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") "
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.502318 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-certificates\") pod \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") "
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.502709 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") "
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.502780 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-tls\") pod \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") "
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.502858 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-installation-pull-secrets\") pod \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\" (UID: \"bb91ed12-623e-4b63-b24d-e47c36f7ef7b\") "
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.504305 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bb91ed12-623e-4b63-b24d-e47c36f7ef7b" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.504630 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "bb91ed12-623e-4b63-b24d-e47c36f7ef7b" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.511316 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "bb91ed12-623e-4b63-b24d-e47c36f7ef7b" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.511893 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-kube-api-access-48vqj" (OuterVolumeSpecName: "kube-api-access-48vqj") pod "bb91ed12-623e-4b63-b24d-e47c36f7ef7b" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b"). InnerVolumeSpecName "kube-api-access-48vqj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.513726 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "bb91ed12-623e-4b63-b24d-e47c36f7ef7b" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.514532 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bb91ed12-623e-4b63-b24d-e47c36f7ef7b" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.522060 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "bb91ed12-623e-4b63-b24d-e47c36f7ef7b" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.528350 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "bb91ed12-623e-4b63-b24d-e47c36f7ef7b" (UID: "bb91ed12-623e-4b63-b24d-e47c36f7ef7b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.604456 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48vqj\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-kube-api-access-48vqj\") on node \"crc\" DevicePath \"\""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.604506 4731 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-certificates\") on node \"crc\" DevicePath \"\""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.604527 4731 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-registry-tls\") on node \"crc\" DevicePath \"\""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.604542 4731 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-installation-pull-secrets\") on node \"crc\" DevicePath \"\""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.604562 4731 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.604579 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-trusted-ca\") on node \"crc\" DevicePath \"\""
Nov 25 15:00:20 crc kubenswrapper[4731]: I1125 15:00:20.604592 4731 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bb91ed12-623e-4b63-b24d-e47c36f7ef7b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Nov 25 15:00:21 crc kubenswrapper[4731]: I1125 15:00:21.365707 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-bbgwg"
Nov 25 15:00:21 crc kubenswrapper[4731]: I1125 15:00:21.392380 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbgwg"]
Nov 25 15:00:21 crc kubenswrapper[4731]: I1125 15:00:21.397372 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-bbgwg"]
Nov 25 15:00:22 crc kubenswrapper[4731]: I1125 15:00:22.818612 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb91ed12-623e-4b63-b24d-e47c36f7ef7b" path="/var/lib/kubelet/pods/bb91ed12-623e-4b63-b24d-e47c36f7ef7b/volumes"
Nov 25 15:00:50 crc kubenswrapper[4731]: I1125 15:00:50.047325 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:00:50 crc kubenswrapper[4731]: I1125 15:00:50.047910 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:01:20 crc kubenswrapper[4731]: I1125 15:01:20.040426 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:01:20 crc kubenswrapper[4731]: I1125 15:01:20.041916 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:01:20 crc kubenswrapper[4731]: I1125 15:01:20.041980 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz"
Nov 25 15:01:20 crc kubenswrapper[4731]: I1125 15:01:20.042792 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"084e1849e03ff9c03a065300886ce95d4f2387a6f08620af335a641d5c33dcf2"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 15:01:20 crc kubenswrapper[4731]: I1125 15:01:20.042913 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://084e1849e03ff9c03a065300886ce95d4f2387a6f08620af335a641d5c33dcf2" gracePeriod=600
Nov 25 15:01:20 crc kubenswrapper[4731]: I1125 15:01:20.727208 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="084e1849e03ff9c03a065300886ce95d4f2387a6f08620af335a641d5c33dcf2" exitCode=0
Nov 25 15:01:20 crc kubenswrapper[4731]: I1125 15:01:20.727261 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"084e1849e03ff9c03a065300886ce95d4f2387a6f08620af335a641d5c33dcf2"}
Nov 25 15:01:20 crc kubenswrapper[4731]: I1125 15:01:20.727590 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"243b2f2b733057d38d0d110d07ed2b0ae2ae87e1f75ff6ea45c0133b6a69171d"}
Nov 25 15:01:20 crc kubenswrapper[4731]: I1125 15:01:20.727617 4731 scope.go:117] "RemoveContainer" containerID="0f697b93e546d937eb9539906a74383f6a6afc70613575dab2946a4b22f1d63b"
Nov 25 15:03:02 crc kubenswrapper[4731]: I1125 15:03:02.990509 4731 scope.go:117] "RemoveContainer" containerID="29a73f562fabe54630074f615e71e9e9ddb06b57c51b2a23f7ac4a10211ae035"
Nov 25 15:03:20 crc kubenswrapper[4731]: I1125 15:03:20.040690 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:03:20 crc kubenswrapper[4731]: I1125 15:03:20.041848 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:03:50 crc kubenswrapper[4731]: I1125 15:03:50.040167 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:03:50 crc kubenswrapper[4731]: I1125 15:03:50.040646 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:04:20 crc kubenswrapper[4731]: I1125 15:04:20.040205 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:04:20 crc kubenswrapper[4731]: I1125 15:04:20.041232 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:04:20 crc kubenswrapper[4731]: I1125 15:04:20.041368 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz"
Nov 25 15:04:20 crc kubenswrapper[4731]: I1125 15:04:20.042189 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"243b2f2b733057d38d0d110d07ed2b0ae2ae87e1f75ff6ea45c0133b6a69171d"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 15:04:20 crc kubenswrapper[4731]: I1125 15:04:20.042255 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://243b2f2b733057d38d0d110d07ed2b0ae2ae87e1f75ff6ea45c0133b6a69171d" gracePeriod=600
Nov 25 15:04:20 crc kubenswrapper[4731]: I1125 15:04:20.778401 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="243b2f2b733057d38d0d110d07ed2b0ae2ae87e1f75ff6ea45c0133b6a69171d" exitCode=0
Nov 25 15:04:20 crc kubenswrapper[4731]: I1125 15:04:20.778467 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"243b2f2b733057d38d0d110d07ed2b0ae2ae87e1f75ff6ea45c0133b6a69171d"}
Nov 25 15:04:20 crc kubenswrapper[4731]: I1125 15:04:20.778889 4731 scope.go:117] "RemoveContainer" containerID="084e1849e03ff9c03a065300886ce95d4f2387a6f08620af335a641d5c33dcf2"
Nov 25 15:04:21 crc kubenswrapper[4731]: I1125 15:04:21.790857 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"0455c99facc14bf1fc68ee4a3d36481680ea8d93970fec20b06e0683abb73b7e"}
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.544885 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-z8qcw"]
Nov 25 15:05:26 crc kubenswrapper[4731]: E1125 15:05:26.545635 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb91ed12-623e-4b63-b24d-e47c36f7ef7b" containerName="registry"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.545652 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb91ed12-623e-4b63-b24d-e47c36f7ef7b" containerName="registry"
Nov 25 15:05:26 crc kubenswrapper[4731]: E1125 15:05:26.545670 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9970e656-62f1-49c3-a6b6-085caa1b9860" containerName="collect-profiles"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.545679 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9970e656-62f1-49c3-a6b6-085caa1b9860" containerName="collect-profiles"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.545797 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb91ed12-623e-4b63-b24d-e47c36f7ef7b" containerName="registry"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.545832 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9970e656-62f1-49c3-a6b6-085caa1b9860" containerName="collect-profiles"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.546326 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-z8qcw"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.549255 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.549572 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.550563 4731 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-kblls"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.551790 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7bwk9"]
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.552731 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7bwk9"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.558771 4731 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-m42rg"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.580360 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7bwk9"]
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.594040 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-z8qcw"]
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.599930 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk6pl\" (UniqueName: \"kubernetes.io/projected/e6f08a04-49da-431f-8ad5-2669b0395a83-kube-api-access-qk6pl\") pod \"cert-manager-cainjector-7f985d654d-z8qcw\" (UID: \"e6f08a04-49da-431f-8ad5-2669b0395a83\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-z8qcw"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.603001 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-s2z26"]
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.603867 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-s2z26"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.605993 4731 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-dcnzd"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.611308 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-s2z26"]
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.701167 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk6pl\" (UniqueName: \"kubernetes.io/projected/e6f08a04-49da-431f-8ad5-2669b0395a83-kube-api-access-qk6pl\") pod \"cert-manager-cainjector-7f985d654d-z8qcw\" (UID: \"e6f08a04-49da-431f-8ad5-2669b0395a83\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-z8qcw"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.701281 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtfcz\" (UniqueName: \"kubernetes.io/projected/4d05d1b3-fbd6-4efc-83a7-b446881f41a1-kube-api-access-dtfcz\") pod \"cert-manager-5b446d88c5-7bwk9\" (UID: \"4d05d1b3-fbd6-4efc-83a7-b446881f41a1\") " pod="cert-manager/cert-manager-5b446d88c5-7bwk9"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.701320 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8h2\" (UniqueName: \"kubernetes.io/projected/b8915b48-72ed-4372-bfbf-7f933af258e0-kube-api-access-rs8h2\") pod \"cert-manager-webhook-5655c58dd6-s2z26\" (UID: \"b8915b48-72ed-4372-bfbf-7f933af258e0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-s2z26"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.732080 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk6pl\" (UniqueName: \"kubernetes.io/projected/e6f08a04-49da-431f-8ad5-2669b0395a83-kube-api-access-qk6pl\") pod \"cert-manager-cainjector-7f985d654d-z8qcw\" (UID: \"e6f08a04-49da-431f-8ad5-2669b0395a83\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-z8qcw"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.803377 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtfcz\" (UniqueName: \"kubernetes.io/projected/4d05d1b3-fbd6-4efc-83a7-b446881f41a1-kube-api-access-dtfcz\") pod \"cert-manager-5b446d88c5-7bwk9\" (UID: \"4d05d1b3-fbd6-4efc-83a7-b446881f41a1\") " pod="cert-manager/cert-manager-5b446d88c5-7bwk9"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.803483 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs8h2\" (UniqueName: \"kubernetes.io/projected/b8915b48-72ed-4372-bfbf-7f933af258e0-kube-api-access-rs8h2\") pod \"cert-manager-webhook-5655c58dd6-s2z26\" (UID: \"b8915b48-72ed-4372-bfbf-7f933af258e0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-s2z26"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.820596 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs8h2\" (UniqueName: \"kubernetes.io/projected/b8915b48-72ed-4372-bfbf-7f933af258e0-kube-api-access-rs8h2\") pod \"cert-manager-webhook-5655c58dd6-s2z26\" (UID: \"b8915b48-72ed-4372-bfbf-7f933af258e0\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-s2z26"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.821147 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtfcz\" (UniqueName: \"kubernetes.io/projected/4d05d1b3-fbd6-4efc-83a7-b446881f41a1-kube-api-access-dtfcz\") pod \"cert-manager-5b446d88c5-7bwk9\" (UID: \"4d05d1b3-fbd6-4efc-83a7-b446881f41a1\") " pod="cert-manager/cert-manager-5b446d88c5-7bwk9"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.870981 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-z8qcw"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.885848 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7bwk9"
Nov 25 15:05:26 crc kubenswrapper[4731]: I1125 15:05:26.919088 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-s2z26"
Nov 25 15:05:27 crc kubenswrapper[4731]: I1125 15:05:27.108257 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-z8qcw"]
Nov 25 15:05:27 crc kubenswrapper[4731]: I1125 15:05:27.126312 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 15:05:27 crc kubenswrapper[4731]: I1125 15:05:27.162675 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-z8qcw" event={"ID":"e6f08a04-49da-431f-8ad5-2669b0395a83","Type":"ContainerStarted","Data":"286be4194a27003eaf7eeb2262fb40923f5259cf4db1f7959354569211da73a0"}
Nov 25 15:05:27 crc kubenswrapper[4731]: I1125 15:05:27.164750 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7bwk9"]
Nov 25 15:05:27 crc kubenswrapper[4731]: I1125 15:05:27.200970 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-s2z26"]
Nov 25 15:05:27 crc kubenswrapper[4731]: W1125 15:05:27.205634 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8915b48_72ed_4372_bfbf_7f933af258e0.slice/crio-708fa62d09969f3aa79a413e0dacb3d2b548740b3d9c9ea86d7fe32de848866b WatchSource:0}: Error finding container 708fa62d09969f3aa79a413e0dacb3d2b548740b3d9c9ea86d7fe32de848866b: Status 404 returned error can't find the container with id 708fa62d09969f3aa79a413e0dacb3d2b548740b3d9c9ea86d7fe32de848866b
Nov 25 15:05:28 crc kubenswrapper[4731]: I1125 15:05:28.168392 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-s2z26" event={"ID":"b8915b48-72ed-4372-bfbf-7f933af258e0","Type":"ContainerStarted","Data":"708fa62d09969f3aa79a413e0dacb3d2b548740b3d9c9ea86d7fe32de848866b"}
Nov 25 15:05:28 crc kubenswrapper[4731]: I1125 15:05:28.169132 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7bwk9" event={"ID":"4d05d1b3-fbd6-4efc-83a7-b446881f41a1","Type":"ContainerStarted","Data":"c36b67729acc221fd911205ab9511dfd3ceabfb585c2bd1138efb32bc1d82b69"}
Nov 25 15:05:34 crc kubenswrapper[4731]: I1125 15:05:34.208606 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-s2z26" event={"ID":"b8915b48-72ed-4372-bfbf-7f933af258e0","Type":"ContainerStarted","Data":"0801819f67dabe7760085f3a754687f3fe5fac9d92118f6675da8d9bd76f44b7"}
Nov 25 15:05:34 crc kubenswrapper[4731]: I1125 15:05:34.209591 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-s2z26"
Nov 25 15:05:34 crc kubenswrapper[4731]: I1125 15:05:34.209902 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7bwk9" event={"ID":"4d05d1b3-fbd6-4efc-83a7-b446881f41a1","Type":"ContainerStarted","Data":"3ac13a58dc875a6a8baf8d5e482c716af90b9d978c5d1144a146a187b32109e9"}
Nov 25 15:05:34 crc kubenswrapper[4731]: I1125 15:05:34.211104 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-z8qcw" event={"ID":"e6f08a04-49da-431f-8ad5-2669b0395a83","Type":"ContainerStarted","Data":"0e9217bc78227b3484071bd71b9cfa3075f923afe1760e5e513be61d33217ae4"}
Nov 25 15:05:34 crc kubenswrapper[4731]: I1125 15:05:34.250365 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-s2z26" podStartSLOduration=1.537146415 podStartE2EDuration="8.250341427s" podCreationTimestamp="2025-11-25 15:05:26 +0000 UTC" firstStartedPulling="2025-11-25 15:05:27.209224821 +0000 UTC m=+685.956359251" lastFinishedPulling="2025-11-25 15:05:33.922419833 +0000 UTC m=+692.669554263" observedRunningTime="2025-11-25 15:05:34.230365504 +0000 UTC m=+692.977499934" watchObservedRunningTime="2025-11-25 15:05:34.250341427 +0000 UTC m=+692.997475857"
Nov 25 15:05:34 crc kubenswrapper[4731]: I1125 15:05:34.251984 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-7bwk9" podStartSLOduration=1.828685514 podStartE2EDuration="8.251976679s" podCreationTimestamp="2025-11-25 15:05:26 +0000 UTC" firstStartedPulling="2025-11-25 15:05:27.175379748 +0000 UTC m=+685.922514178" lastFinishedPulling="2025-11-25 15:05:33.598670913 +0000 UTC m=+692.345805343" observedRunningTime="2025-11-25 15:05:34.245931026 +0000 UTC m=+692.993065456" watchObservedRunningTime="2025-11-25 15:05:34.251976679 +0000 UTC m=+692.999111119"
Nov 25 15:05:41 crc kubenswrapper[4731]: I1125 15:05:41.922717 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-s2z26"
Nov 25 15:05:41 crc kubenswrapper[4731]: I1125 15:05:41.939129 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-z8qcw" podStartSLOduration=9.468275686 podStartE2EDuration="15.939101339s" podCreationTimestamp="2025-11-25 15:05:26 +0000 UTC" firstStartedPulling="2025-11-25 15:05:27.126029724 +0000 UTC m=+685.873164154" lastFinishedPulling="2025-11-25 15:05:33.596855377 +0000 UTC m=+692.343989807" observedRunningTime="2025-11-25 15:05:34.265856548 +0000 UTC m=+693.012990978" watchObservedRunningTime="2025-11-25 15:05:41.939101339 +0000 UTC m=+700.686235769"
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.459683 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hc8cx"]
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.461529 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovn-controller" containerID="cri-o://35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f" gracePeriod=30
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.461600 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3" gracePeriod=30
podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3" gracePeriod=30 Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.461757 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kube-rbac-proxy-node" containerID="cri-o://ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d" gracePeriod=30 Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.461589 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="sbdb" containerID="cri-o://3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751" gracePeriod=30 Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.461859 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="nbdb" containerID="cri-o://f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8" gracePeriod=30 Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.461925 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="northd" containerID="cri-o://9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091" gracePeriod=30 Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.462110 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovn-acl-logging" containerID="cri-o://0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38" gracePeriod=30 Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.510635 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" containerID="cri-o://456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e" gracePeriod=30 Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.820086 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/3.log" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.823648 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovn-acl-logging/0.log" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.824420 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovn-controller/0.log" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.825240 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.841201 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-kubelet\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.841324 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881390 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-c8fzn"] Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881745 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kubecfg-setup" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881762 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kubecfg-setup" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881774 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovn-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881781 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovn-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881791 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="sbdb" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881798 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="sbdb" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881808 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kube-rbac-proxy-node" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881872 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kube-rbac-proxy-node" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881885 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881893 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881906 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881912 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881919 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881924 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881931 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="nbdb" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881938 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="nbdb" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881946 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="northd" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881952 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="northd" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881961 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881968 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.881975 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovn-acl-logging" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.881982 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovn-acl-logging" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882087 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882095 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="northd" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882101 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="nbdb" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882110 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882120 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kube-rbac-proxy-node" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882128 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882136 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="kube-rbac-proxy-ovn-metrics" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882144 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882151 4731 
memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="sbdb" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882158 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882166 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovn-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882174 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovn-acl-logging" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.882275 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882284 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: E1125 15:05:57.882465 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.882472 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerName="ovnkube-controller" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.883980 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.941935 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-netns\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942080 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "host-run-netns". 
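PluginName "kubernetes.io/host-path", VolumeGidValue ""

From 15:05:57.841 onward the kubelet's volume manager reconciles mounts: everything still mounted for the dead pod 2fa0e665-3cc9-4bb9-9758-75bb8db39513 is torn down (the "operationExecutor.UnmountVolume started" and "UnmountVolume.TearDown succeeded" pairs), while the same volumes are verified and mounted for the replacement pod a few lines later. Conceptually this is a diff of desired state against actual state. The sketch below is a deliberately simplified illustration of that pattern under hypothetical types; it is not the kubelet's real code:

```go
package main

import "fmt"

type volumeName string

// reconcile mimics the desired-state vs actual-state diff the log traces:
// unmount what is mounted but no longer desired, mount what is desired
// but not yet present.
func reconcile(desired, actual map[volumeName]bool) {
	for v := range actual {
		if !desired[v] {
			// corresponds to "operationExecutor.UnmountVolume started"
			fmt.Printf("UnmountVolume started for volume %q\n", v)
		}
	}
	for v := range desired {
		if !actual[v] {
			// corresponds to VerifyControllerAttachedVolume + MountVolume.SetUp
			fmt.Printf("MountVolume.SetUp for volume %q\n", v)
		}
	}
}

func main() {
	oldPod := map[volumeName]bool{"host-kubelet": true, "host-run-netns": true}
	newPod := map[volumeName]bool{"host-kubelet": true, "ovn-node-metrics-cert": true}
	reconcile(newPod, oldPod)
}
```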
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942440 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-log-socket\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942480 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-node-log\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942510 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-etc-openvswitch\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942545 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-env-overrides\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942573 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-var-lib-openvswitch\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942569 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-log-socket" (OuterVolumeSpecName: "log-socket") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942598 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-netd\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942635 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942651 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-systemd-units\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942650 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942663 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942682 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-config\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942693 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942713 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-node-log" (OuterVolumeSpecName: "node-log") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942722 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-openvswitch\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942764 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovn-node-metrics-cert\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942791 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-var-lib-cni-networks-ovn-kubernetes\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942806 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942835 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-slash\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942868 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-slash" (OuterVolumeSpecName: "host-slash") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942886 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-ovn-kubernetes\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942908 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942916 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-script-lib\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942955 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdcdt\" (UniqueName: \"kubernetes.io/projected/2fa0e665-3cc9-4bb9-9758-75bb8db39513-kube-api-access-xdcdt\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.942982 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-ovn\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943004 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-systemd\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943044 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-bin\") pod \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\" (UID: \"2fa0e665-3cc9-4bb9-9758-75bb8db39513\") " Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943269 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943279 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943286 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943305 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). 
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943307 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943327 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-run-netns\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943343 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943360 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-cni-bin\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943399 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-etc-openvswitch\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943428 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943539 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-log-socket\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943600 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-run-ovn-kubernetes\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943669 4731 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-systemd-units\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943709 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cdd6684-a800-426b-8854-0b3f9430508f-ovnkube-config\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943765 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cdd6684-a800-426b-8854-0b3f9430508f-ovnkube-script-lib\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943803 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-slash\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943868 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-run-ovn\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943907 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmsx\" (UniqueName: \"kubernetes.io/projected/5cdd6684-a800-426b-8854-0b3f9430508f-kube-api-access-cmmsx\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943934 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-run-openvswitch\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943956 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cdd6684-a800-426b-8854-0b3f9430508f-env-overrides\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.943990 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-var-lib-openvswitch\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 
15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944020 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-node-log\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944050 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5cdd6684-a800-426b-8854-0b3f9430508f-ovn-node-metrics-cert\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944078 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-run-systemd\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944171 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-kubelet\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944235 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-cni-netd\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944368 4731 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-systemd-units\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944390 4731 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944399 4731 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944415 4731 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944427 4731 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-slash\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944438 4731 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944449 4731 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944457 4731 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944466 4731 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-bin\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944475 4731 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-run-netns\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944488 4731 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-kubelet\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944497 4731 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-log-socket\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944506 4731 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-node-log\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944514 4731 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944525 4731 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/2fa0e665-3cc9-4bb9-9758-75bb8db39513-env-overrides\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944535 4731 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.944546 4731 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-host-cni-netd\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.951432 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa0e665-3cc9-4bb9-9758-75bb8db39513-kube-api-access-xdcdt" (OuterVolumeSpecName: "kube-api-access-xdcdt") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "kube-api-access-xdcdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.951923 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:05:57 crc kubenswrapper[4731]: I1125 15:05:57.960247 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "2fa0e665-3cc9-4bb9-9758-75bb8db39513" (UID: "2fa0e665-3cc9-4bb9-9758-75bb8db39513"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.046543 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-log-socket\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.046632 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-run-ovn-kubernetes\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.046681 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-systemd-units\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.046725 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cdd6684-a800-426b-8854-0b3f9430508f-ovnkube-config\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.046746 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-log-socket\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.046765 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cdd6684-a800-426b-8854-0b3f9430508f-ovnkube-script-lib\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.046851 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-run-ovn-kubernetes\") pod \"ovnkube-node-c8fzn\" (UID: 
\"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.046917 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-slash\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.046962 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-systemd-units\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.046971 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-run-ovn\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047003 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-slash\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047014 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmmsx\" (UniqueName: \"kubernetes.io/projected/5cdd6684-a800-426b-8854-0b3f9430508f-kube-api-access-cmmsx\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047045 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-run-openvswitch\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047068 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cdd6684-a800-426b-8854-0b3f9430508f-env-overrides\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047096 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-var-lib-openvswitch\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047121 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-node-log\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: 
I1125 15:05:58.047143 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-run-openvswitch\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047149 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5cdd6684-a800-426b-8854-0b3f9430508f-ovn-node-metrics-cert\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047213 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-run-systemd\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047248 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-kubelet\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047276 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-cni-netd\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047334 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-run-netns\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047363 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-cni-bin\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047394 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-etc-openvswitch\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047395 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-var-lib-openvswitch\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047448 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-log\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-node-log\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047499 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-run-netns\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047512 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-cni-netd\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047454 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047536 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-cni-bin\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047358 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-run-systemd\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047422 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047613 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-etc-openvswitch\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047644 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-host-kubelet\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047204 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/5cdd6684-a800-426b-8854-0b3f9430508f-run-ovn\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047751 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5cdd6684-a800-426b-8854-0b3f9430508f-ovnkube-script-lib\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047851 4731 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/2fa0e665-3cc9-4bb9-9758-75bb8db39513-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047909 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdcdt\" (UniqueName: \"kubernetes.io/projected/2fa0e665-3cc9-4bb9-9758-75bb8db39513-kube-api-access-xdcdt\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047933 4731 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/2fa0e665-3cc9-4bb9-9758-75bb8db39513-run-systemd\") on node \"crc\" DevicePath \"\"" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.047928 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5cdd6684-a800-426b-8854-0b3f9430508f-env-overrides\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.048460 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5cdd6684-a800-426b-8854-0b3f9430508f-ovnkube-config\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.051301 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5cdd6684-a800-426b-8854-0b3f9430508f-ovn-node-metrics-cert\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.066145 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmmsx\" (UniqueName: \"kubernetes.io/projected/5cdd6684-a800-426b-8854-0b3f9430508f-kube-api-access-cmmsx\") pod \"ovnkube-node-c8fzn\" (UID: \"5cdd6684-a800-426b-8854-0b3f9430508f\") " pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.199291 4731 util.go:30] "No sandbox for pod can be found. 
Nov 25 15:05:58 crc kubenswrapper[4731]: W1125 15:05:58.230638 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cdd6684_a800_426b_8854_0b3f9430508f.slice/crio-9f11b54079823e5457f0b9d0ae4718032984ef8f884f74edad9c968f7187ccbb WatchSource:0}: Error finding container 9f11b54079823e5457f0b9d0ae4718032984ef8f884f74edad9c968f7187ccbb: Status 404 returned error can't find the container with id 9f11b54079823e5457f0b9d0ae4718032984ef8f884f74edad9c968f7187ccbb
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.371616 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovnkube-controller/3.log"
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.374864 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovn-acl-logging/0.log"
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.375457 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hc8cx_2fa0e665-3cc9-4bb9-9758-75bb8db39513/ovn-controller/0.log"
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376254 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e" exitCode=0
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376289 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751" exitCode=0
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376298 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8" exitCode=0
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376305 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091" exitCode=0
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376312 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3" exitCode=0
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376319 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d" exitCode=0
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376328 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38" exitCode=143
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376337 4731 generic.go:334] "Generic (PLEG): container finished" podID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" containerID="35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f" exitCode=143
Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376338 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx"
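event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"}

Here the PLEG (pod lifecycle event generator) relist has noticed that all of the old pod's containers are gone and emits one ContainerDied event per container ID. The exitCode values follow the usual Unix convention: 0 is a clean exit, and values above 128 encode 128 plus the fatal signal, so the two exitCode=143 entries are containers that died on SIGTERM (128+15) before managing to exit on their own. A tiny decoder of that convention:

```go
package main

import (
	"fmt"
	"syscall"
)

// classify interprets the exitCode values reported by the PLEG lines:
// 0 is a clean exit, anything above 128 is conventionally 128+signal.
func classify(code int) string {
	switch {
	case code == 0:
		return "exited cleanly"
	case code > 128:
		return fmt.Sprintf("killed by signal: %s", syscall.Signal(code-128))
	default:
		return fmt.Sprintf("exited with error code %d", code)
	}
}

func main() {
	for _, c := range []int{0, 143} { // the two values seen in the log above
		fmt.Printf("exitCode=%d: %s\n", c, classify(c))
	}
}
```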
event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376394 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376402 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376410 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376426 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376440 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376456 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376427 4731 scope.go:117] "RemoveContainer" containerID="456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376468 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376600 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376621 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376639 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376664 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376672 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376679 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376685 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376693 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376718 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376785 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376852 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376860 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376867 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376873 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376879 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376886 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376893 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376900 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376927 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376936 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376948 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376959 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376967 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376973 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.376981 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377011 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377018 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377025 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377033 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377040 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377051 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hc8cx" event={"ID":"2fa0e665-3cc9-4bb9-9758-75bb8db39513","Type":"ContainerDied","Data":"8c041fd560217950e94869b93c1f216f39d6ebe796d4647e8c5649613919fc7d"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377090 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377102 4731 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377111 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377118 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377126 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377134 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377141 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377174 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377185 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.377192 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.378395 4731 generic.go:334] "Generic (PLEG): container finished" podID="5cdd6684-a800-426b-8854-0b3f9430508f" containerID="8ca180fe77339b78c555cd2e17f74719acdb4fc2d0304fabdc98f102ac3e9ab8" exitCode=0 Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.378474 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" event={"ID":"5cdd6684-a800-426b-8854-0b3f9430508f","Type":"ContainerDied","Data":"8ca180fe77339b78c555cd2e17f74719acdb4fc2d0304fabdc98f102ac3e9ab8"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.378497 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" event={"ID":"5cdd6684-a800-426b-8854-0b3f9430508f","Type":"ContainerStarted","Data":"9f11b54079823e5457f0b9d0ae4718032984ef8f884f74edad9c968f7187ccbb"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.382807 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xx4w_062d0f18-096f-4f99-9faa-74dbd454bcd0/kube-multus/2.log" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.383694 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xx4w_062d0f18-096f-4f99-9faa-74dbd454bcd0/kube-multus/1.log" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.383776 
4731 generic.go:334] "Generic (PLEG): container finished" podID="062d0f18-096f-4f99-9faa-74dbd454bcd0" containerID="9e6b96b089080d9bf222e63145a44ce77ca0367c32031873757a6d7124b3aae9" exitCode=2 Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.383848 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xx4w" event={"ID":"062d0f18-096f-4f99-9faa-74dbd454bcd0","Type":"ContainerDied","Data":"9e6b96b089080d9bf222e63145a44ce77ca0367c32031873757a6d7124b3aae9"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.383896 4731 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2"} Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.384293 4731 scope.go:117] "RemoveContainer" containerID="9e6b96b089080d9bf222e63145a44ce77ca0367c32031873757a6d7124b3aae9" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.384471 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8xx4w_openshift-multus(062d0f18-096f-4f99-9faa-74dbd454bcd0)\"" pod="openshift-multus/multus-8xx4w" podUID="062d0f18-096f-4f99-9faa-74dbd454bcd0" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.435387 4731 scope.go:117] "RemoveContainer" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.470846 4731 scope.go:117] "RemoveContainer" containerID="3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.485081 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hc8cx"] Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.489353 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hc8cx"] Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.498439 4731 scope.go:117] "RemoveContainer" containerID="f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.515909 4731 scope.go:117] "RemoveContainer" containerID="9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.535277 4731 scope.go:117] "RemoveContainer" containerID="053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.549868 4731 scope.go:117] "RemoveContainer" containerID="ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.659510 4731 scope.go:117] "RemoveContainer" containerID="0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.680977 4731 scope.go:117] "RemoveContainer" containerID="35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.711676 4731 scope.go:117] "RemoveContainer" containerID="b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.735941 4731 scope.go:117] "RemoveContainer" containerID="456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.736370 4731 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": container with ID starting with 456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e not found: ID does not exist" containerID="456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.736414 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"} err="failed to get container status \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": rpc error: code = NotFound desc = could not find container \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": container with ID starting with 456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.736442 4731 scope.go:117] "RemoveContainer" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.736901 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\": container with ID starting with fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6 not found: ID does not exist" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.736941 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"} err="failed to get container status \"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\": rpc error: code = NotFound desc = could not find container \"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\": container with ID starting with fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.737169 4731 scope.go:117] "RemoveContainer" containerID="3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.737765 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\": container with ID starting with 3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751 not found: ID does not exist" containerID="3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.737872 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751"} err="failed to get container status \"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\": rpc error: code = NotFound desc = could not find container \"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\": container with ID starting with 3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.737955 4731 scope.go:117] "RemoveContainer" 
containerID="f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.738783 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\": container with ID starting with f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8 not found: ID does not exist" containerID="f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.738825 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8"} err="failed to get container status \"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\": rpc error: code = NotFound desc = could not find container \"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\": container with ID starting with f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.738848 4731 scope.go:117] "RemoveContainer" containerID="9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.739142 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\": container with ID starting with 9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091 not found: ID does not exist" containerID="9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.739173 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091"} err="failed to get container status \"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\": rpc error: code = NotFound desc = could not find container \"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\": container with ID starting with 9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.739194 4731 scope.go:117] "RemoveContainer" containerID="053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.739582 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\": container with ID starting with 053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3 not found: ID does not exist" containerID="053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.739606 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3"} err="failed to get container status \"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\": rpc error: code = NotFound desc = could not find container \"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\": container with ID starting with 
053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.739621 4731 scope.go:117] "RemoveContainer" containerID="ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.739940 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\": container with ID starting with ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d not found: ID does not exist" containerID="ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.739968 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d"} err="failed to get container status \"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\": rpc error: code = NotFound desc = could not find container \"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\": container with ID starting with ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.739983 4731 scope.go:117] "RemoveContainer" containerID="0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.740359 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\": container with ID starting with 0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38 not found: ID does not exist" containerID="0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.740393 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38"} err="failed to get container status \"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\": rpc error: code = NotFound desc = could not find container \"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\": container with ID starting with 0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.740440 4731 scope.go:117] "RemoveContainer" containerID="35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.740697 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\": container with ID starting with 35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f not found: ID does not exist" containerID="35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.740726 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f"} err="failed to get container status \"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\": rpc 
error: code = NotFound desc = could not find container \"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\": container with ID starting with 35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.740746 4731 scope.go:117] "RemoveContainer" containerID="b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb" Nov 25 15:05:58 crc kubenswrapper[4731]: E1125 15:05:58.741084 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\": container with ID starting with b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb not found: ID does not exist" containerID="b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.741107 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb"} err="failed to get container status \"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\": rpc error: code = NotFound desc = could not find container \"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\": container with ID starting with b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.741121 4731 scope.go:117] "RemoveContainer" containerID="456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.741454 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"} err="failed to get container status \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": rpc error: code = NotFound desc = could not find container \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": container with ID starting with 456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.741479 4731 scope.go:117] "RemoveContainer" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.741900 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"} err="failed to get container status \"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\": rpc error: code = NotFound desc = could not find container \"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\": container with ID starting with fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.741926 4731 scope.go:117] "RemoveContainer" containerID="3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.742282 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751"} err="failed to get container status \"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\": rpc 
error: code = NotFound desc = could not find container \"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\": container with ID starting with 3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.742305 4731 scope.go:117] "RemoveContainer" containerID="f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.742661 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8"} err="failed to get container status \"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\": rpc error: code = NotFound desc = could not find container \"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\": container with ID starting with f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.742688 4731 scope.go:117] "RemoveContainer" containerID="9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.743865 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091"} err="failed to get container status \"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\": rpc error: code = NotFound desc = could not find container \"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\": container with ID starting with 9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.743893 4731 scope.go:117] "RemoveContainer" containerID="053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.744363 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3"} err="failed to get container status \"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\": rpc error: code = NotFound desc = could not find container \"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\": container with ID starting with 053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.744392 4731 scope.go:117] "RemoveContainer" containerID="ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.745036 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d"} err="failed to get container status \"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\": rpc error: code = NotFound desc = could not find container \"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\": container with ID starting with ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.745068 4731 scope.go:117] "RemoveContainer" containerID="0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38" Nov 25 15:05:58 crc 
kubenswrapper[4731]: I1125 15:05:58.747072 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38"} err="failed to get container status \"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\": rpc error: code = NotFound desc = could not find container \"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\": container with ID starting with 0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.747104 4731 scope.go:117] "RemoveContainer" containerID="35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.747450 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f"} err="failed to get container status \"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\": rpc error: code = NotFound desc = could not find container \"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\": container with ID starting with 35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.747474 4731 scope.go:117] "RemoveContainer" containerID="b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.747714 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb"} err="failed to get container status \"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\": rpc error: code = NotFound desc = could not find container \"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\": container with ID starting with b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.747736 4731 scope.go:117] "RemoveContainer" containerID="456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.747957 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"} err="failed to get container status \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": rpc error: code = NotFound desc = could not find container \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": container with ID starting with 456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.747983 4731 scope.go:117] "RemoveContainer" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.750036 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"} err="failed to get container status \"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\": rpc error: code = NotFound desc = could not find container \"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\": container with ID 
starting with fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.750070 4731 scope.go:117] "RemoveContainer" containerID="3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.750305 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751"} err="failed to get container status \"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\": rpc error: code = NotFound desc = could not find container \"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\": container with ID starting with 3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.750329 4731 scope.go:117] "RemoveContainer" containerID="f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.750524 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8"} err="failed to get container status \"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\": rpc error: code = NotFound desc = could not find container \"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\": container with ID starting with f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.750549 4731 scope.go:117] "RemoveContainer" containerID="9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.750723 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091"} err="failed to get container status \"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\": rpc error: code = NotFound desc = could not find container \"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\": container with ID starting with 9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.750745 4731 scope.go:117] "RemoveContainer" containerID="053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.750945 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3"} err="failed to get container status \"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\": rpc error: code = NotFound desc = could not find container \"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\": container with ID starting with 053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.750970 4731 scope.go:117] "RemoveContainer" containerID="ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.751148 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d"} err="failed to get container status \"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\": rpc error: code = NotFound desc = could not find container \"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\": container with ID starting with ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.751172 4731 scope.go:117] "RemoveContainer" containerID="0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.760962 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38"} err="failed to get container status \"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\": rpc error: code = NotFound desc = could not find container \"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\": container with ID starting with 0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.761039 4731 scope.go:117] "RemoveContainer" containerID="35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.761427 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f"} err="failed to get container status \"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\": rpc error: code = NotFound desc = could not find container \"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\": container with ID starting with 35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.761458 4731 scope.go:117] "RemoveContainer" containerID="b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.761669 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb"} err="failed to get container status \"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\": rpc error: code = NotFound desc = could not find container \"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\": container with ID starting with b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.761701 4731 scope.go:117] "RemoveContainer" containerID="456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.766366 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"} err="failed to get container status \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": rpc error: code = NotFound desc = could not find container \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": container with ID starting with 456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e not found: ID does not exist" Nov 
25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.766431 4731 scope.go:117] "RemoveContainer" containerID="fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.766891 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6"} err="failed to get container status \"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\": rpc error: code = NotFound desc = could not find container \"fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6\": container with ID starting with fe00e34a8caf0ca1af8b2c414a37f032085e7b175e3189e727fcd1a98af17ce6 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.766962 4731 scope.go:117] "RemoveContainer" containerID="3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.767999 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751"} err="failed to get container status \"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\": rpc error: code = NotFound desc = could not find container \"3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751\": container with ID starting with 3af9e54dc17fa3863e9c75e2d107237850703961185086c139c48af25b6fa751 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.768030 4731 scope.go:117] "RemoveContainer" containerID="f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.768602 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8"} err="failed to get container status \"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\": rpc error: code = NotFound desc = could not find container \"f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8\": container with ID starting with f35da956011bb8e7807955637e4c73139342e0393d1318eebb83adb2049f83c8 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.768634 4731 scope.go:117] "RemoveContainer" containerID="9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.769334 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091"} err="failed to get container status \"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\": rpc error: code = NotFound desc = could not find container \"9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091\": container with ID starting with 9d2739aa2482a2987b8134d9b12a074fb87eefb784485235a388872f426eb091 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.769368 4731 scope.go:117] "RemoveContainer" containerID="053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.769917 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3"} err="failed to get container status 
\"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\": rpc error: code = NotFound desc = could not find container \"053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3\": container with ID starting with 053381c11623cc520f3a4256b601191bba3ad1d79da16aae8b7f5467b4d1e6c3 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.769942 4731 scope.go:117] "RemoveContainer" containerID="ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.770352 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d"} err="failed to get container status \"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\": rpc error: code = NotFound desc = could not find container \"ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d\": container with ID starting with ac9f4399525b8c55edf312ed78afc8b051a31b8b46b5763757c27e5c0b78f55d not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.770375 4731 scope.go:117] "RemoveContainer" containerID="0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.770707 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38"} err="failed to get container status \"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\": rpc error: code = NotFound desc = could not find container \"0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38\": container with ID starting with 0da15dd1b4412e9bbffaac8d958a527dd0190b2e9bf01f20d25391dcf09dad38 not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.770733 4731 scope.go:117] "RemoveContainer" containerID="35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.771242 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f"} err="failed to get container status \"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\": rpc error: code = NotFound desc = could not find container \"35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f\": container with ID starting with 35be40069ea0dd9166bd56deff63f9b477ec82bec60bbe8a72c2e76a92a5d95f not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.771268 4731 scope.go:117] "RemoveContainer" containerID="b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.771588 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb"} err="failed to get container status \"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\": rpc error: code = NotFound desc = could not find container \"b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb\": container with ID starting with b36cbc6571bbb03456bb10a390c73735bd63d854d61dc3aae4262ac4c41e78cb not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.771609 4731 scope.go:117] "RemoveContainer" 
containerID="456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.771886 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e"} err="failed to get container status \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": rpc error: code = NotFound desc = could not find container \"456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e\": container with ID starting with 456e7042acc67ba5e4f5fd8683e156b049a30a6cc7824d9b221677c757d6149e not found: ID does not exist" Nov 25 15:05:58 crc kubenswrapper[4731]: I1125 15:05:58.820997 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa0e665-3cc9-4bb9-9758-75bb8db39513" path="/var/lib/kubelet/pods/2fa0e665-3cc9-4bb9-9758-75bb8db39513/volumes" Nov 25 15:05:59 crc kubenswrapper[4731]: I1125 15:05:59.395181 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" event={"ID":"5cdd6684-a800-426b-8854-0b3f9430508f","Type":"ContainerStarted","Data":"a6009033d0c247b8b309d4a77c42134ca7be3fcf448e02cefdf9461befb7a063"} Nov 25 15:05:59 crc kubenswrapper[4731]: I1125 15:05:59.395775 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" event={"ID":"5cdd6684-a800-426b-8854-0b3f9430508f","Type":"ContainerStarted","Data":"977a39d8c3da72917450444f7580c4a2ff3fb3c1d34608441f747ad34847a305"} Nov 25 15:05:59 crc kubenswrapper[4731]: I1125 15:05:59.395793 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" event={"ID":"5cdd6684-a800-426b-8854-0b3f9430508f","Type":"ContainerStarted","Data":"00ba213de709864ee0d8a150d6df7a92b2c21e3877fd6778f9644a220c3ce190"} Nov 25 15:05:59 crc kubenswrapper[4731]: I1125 15:05:59.395804 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" event={"ID":"5cdd6684-a800-426b-8854-0b3f9430508f","Type":"ContainerStarted","Data":"a50c7365e17cac5612045dd24a011dbb6454ee48f47a757393a5bdb837c46594"} Nov 25 15:05:59 crc kubenswrapper[4731]: I1125 15:05:59.395830 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" event={"ID":"5cdd6684-a800-426b-8854-0b3f9430508f","Type":"ContainerStarted","Data":"1470935e55979fc4f3c57b6e4f1446d2d0a7ee0a245b8c3ba15c5710fafa39aa"} Nov 25 15:06:00 crc kubenswrapper[4731]: I1125 15:06:00.407863 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" event={"ID":"5cdd6684-a800-426b-8854-0b3f9430508f","Type":"ContainerStarted","Data":"1895a210b83022adfc22a5bd908f8524d84a8703f8328439b0d271c25c8515be"} Nov 25 15:06:02 crc kubenswrapper[4731]: I1125 15:06:02.424957 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" event={"ID":"5cdd6684-a800-426b-8854-0b3f9430508f","Type":"ContainerStarted","Data":"b2b7277e6526ce011e4a32af13fb332cf28f6dd40da1ae94f65301035a902240"} Nov 25 15:06:03 crc kubenswrapper[4731]: I1125 15:06:03.054687 4731 scope.go:117] "RemoveContainer" containerID="471b56a9f37dafa266c74c8ad94d25f2ba08b9c1f907a69ba186a3ff2b1517d2" Nov 25 15:06:03 crc kubenswrapper[4731]: I1125 15:06:03.433300 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xx4w_062d0f18-096f-4f99-9faa-74dbd454bcd0/kube-multus/2.log" 
Nov 25 15:06:04 crc kubenswrapper[4731]: I1125 15:06:04.445575 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" event={"ID":"5cdd6684-a800-426b-8854-0b3f9430508f","Type":"ContainerStarted","Data":"7f9eeddd9dbc732b1505c434cab59d145b9c62246699cb69d3a2448c175b153b"}
Nov 25 15:06:04 crc kubenswrapper[4731]: I1125 15:06:04.446174 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:06:04 crc kubenswrapper[4731]: I1125 15:06:04.446191 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:06:04 crc kubenswrapper[4731]: I1125 15:06:04.446218 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:06:04 crc kubenswrapper[4731]: I1125 15:06:04.477669 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:06:04 crc kubenswrapper[4731]: I1125 15:06:04.477748 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn"
Nov 25 15:06:04 crc kubenswrapper[4731]: I1125 15:06:04.480936 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" podStartSLOduration=7.480915269 podStartE2EDuration="7.480915269s" podCreationTimestamp="2025-11-25 15:05:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:06:04.480762895 +0000 UTC m=+723.227897325" watchObservedRunningTime="2025-11-25 15:06:04.480915269 +0000 UTC m=+723.228049699"
Nov 25 15:06:10 crc kubenswrapper[4731]: I1125 15:06:10.811877 4731 scope.go:117] "RemoveContainer" containerID="9e6b96b089080d9bf222e63145a44ce77ca0367c32031873757a6d7124b3aae9"
Nov 25 15:06:10 crc kubenswrapper[4731]: E1125 15:06:10.813262 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8xx4w_openshift-multus(062d0f18-096f-4f99-9faa-74dbd454bcd0)\"" pod="openshift-multus/multus-8xx4w" podUID="062d0f18-096f-4f99-9faa-74dbd454bcd0"
Nov 25 15:06:22 crc kubenswrapper[4731]: I1125 15:06:22.814451 4731 scope.go:117] "RemoveContainer" containerID="9e6b96b089080d9bf222e63145a44ce77ca0367c32031873757a6d7124b3aae9"
Nov 25 15:06:23 crc kubenswrapper[4731]: I1125 15:06:23.566133 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xx4w_062d0f18-096f-4f99-9faa-74dbd454bcd0/kube-multus/2.log"
Nov 25 15:06:23 crc kubenswrapper[4731]: I1125 15:06:23.566749 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xx4w" event={"ID":"062d0f18-096f-4f99-9faa-74dbd454bcd0","Type":"ContainerStarted","Data":"94ebcd9dbbf5dcb97d5871e86f78f2e84a2fcdf9b936ce9d058e7a4bc05e8bf4"}
Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.272185 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh"]
Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.273558 4731 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.275777 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.288965 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh"] Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.459845 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9lbq\" (UniqueName: \"kubernetes.io/projected/b6d7e2d5-3efb-47ec-9cb5-685467f22976-kube-api-access-m9lbq\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.459980 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.460181 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.561866 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9lbq\" (UniqueName: \"kubernetes.io/projected/b6d7e2d5-3efb-47ec-9cb5-685467f22976-kube-api-access-m9lbq\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.562109 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.562217 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.563093 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-bundle\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.563347 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-util\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.601603 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9lbq\" (UniqueName: \"kubernetes.io/projected/b6d7e2d5-3efb-47ec-9cb5-685467f22976-kube-api-access-m9lbq\") pod \"5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:26 crc kubenswrapper[4731]: I1125 15:06:26.890512 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:27 crc kubenswrapper[4731]: I1125 15:06:27.085617 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh"] Nov 25 15:06:27 crc kubenswrapper[4731]: W1125 15:06:27.091038 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6d7e2d5_3efb_47ec_9cb5_685467f22976.slice/crio-900198a1473abb0f27d06b2e17824fcbfd74f690f1c1b5144f2b877414a4b1a6 WatchSource:0}: Error finding container 900198a1473abb0f27d06b2e17824fcbfd74f690f1c1b5144f2b877414a4b1a6: Status 404 returned error can't find the container with id 900198a1473abb0f27d06b2e17824fcbfd74f690f1c1b5144f2b877414a4b1a6 Nov 25 15:06:27 crc kubenswrapper[4731]: I1125 15:06:27.592074 4731 generic.go:334] "Generic (PLEG): container finished" podID="b6d7e2d5-3efb-47ec-9cb5-685467f22976" containerID="dec4b4f6d423b31ee29628ef4224c159b528bde2aa9bcbcd884cffa045921e19" exitCode=0 Nov 25 15:06:27 crc kubenswrapper[4731]: I1125 15:06:27.592133 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" event={"ID":"b6d7e2d5-3efb-47ec-9cb5-685467f22976","Type":"ContainerDied","Data":"dec4b4f6d423b31ee29628ef4224c159b528bde2aa9bcbcd884cffa045921e19"} Nov 25 15:06:27 crc kubenswrapper[4731]: I1125 15:06:27.592325 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" event={"ID":"b6d7e2d5-3efb-47ec-9cb5-685467f22976","Type":"ContainerStarted","Data":"900198a1473abb0f27d06b2e17824fcbfd74f690f1c1b5144f2b877414a4b1a6"} Nov 25 15:06:28 crc kubenswrapper[4731]: I1125 15:06:28.226876 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-c8fzn" Nov 25 15:06:30 crc kubenswrapper[4731]: I1125 15:06:30.611173 4731 generic.go:334] "Generic (PLEG): container finished" podID="b6d7e2d5-3efb-47ec-9cb5-685467f22976" 
containerID="f7e5aca72e437ec30d38a2179e507f5498ad46d4fe80e4910ac090e4df3c2f21" exitCode=0 Nov 25 15:06:30 crc kubenswrapper[4731]: I1125 15:06:30.611218 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" event={"ID":"b6d7e2d5-3efb-47ec-9cb5-685467f22976","Type":"ContainerDied","Data":"f7e5aca72e437ec30d38a2179e507f5498ad46d4fe80e4910ac090e4df3c2f21"} Nov 25 15:06:31 crc kubenswrapper[4731]: I1125 15:06:31.619548 4731 generic.go:334] "Generic (PLEG): container finished" podID="b6d7e2d5-3efb-47ec-9cb5-685467f22976" containerID="32d74c5bb12bab09d90c6caa1e847f8da72d1f13a1ae622e3cd0f841cf7c19d2" exitCode=0 Nov 25 15:06:31 crc kubenswrapper[4731]: I1125 15:06:31.619661 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" event={"ID":"b6d7e2d5-3efb-47ec-9cb5-685467f22976","Type":"ContainerDied","Data":"32d74c5bb12bab09d90c6caa1e847f8da72d1f13a1ae622e3cd0f841cf7c19d2"} Nov 25 15:06:32 crc kubenswrapper[4731]: I1125 15:06:32.889916 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.054468 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-util\") pod \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.054547 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9lbq\" (UniqueName: \"kubernetes.io/projected/b6d7e2d5-3efb-47ec-9cb5-685467f22976-kube-api-access-m9lbq\") pod \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.054619 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-bundle\") pod \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\" (UID: \"b6d7e2d5-3efb-47ec-9cb5-685467f22976\") " Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.055547 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-bundle" (OuterVolumeSpecName: "bundle") pod "b6d7e2d5-3efb-47ec-9cb5-685467f22976" (UID: "b6d7e2d5-3efb-47ec-9cb5-685467f22976"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.060162 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6d7e2d5-3efb-47ec-9cb5-685467f22976-kube-api-access-m9lbq" (OuterVolumeSpecName: "kube-api-access-m9lbq") pod "b6d7e2d5-3efb-47ec-9cb5-685467f22976" (UID: "b6d7e2d5-3efb-47ec-9cb5-685467f22976"). InnerVolumeSpecName "kube-api-access-m9lbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.070684 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-util" (OuterVolumeSpecName: "util") pod "b6d7e2d5-3efb-47ec-9cb5-685467f22976" (UID: "b6d7e2d5-3efb-47ec-9cb5-685467f22976"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.155975 4731 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-util\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.156034 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9lbq\" (UniqueName: \"kubernetes.io/projected/b6d7e2d5-3efb-47ec-9cb5-685467f22976-kube-api-access-m9lbq\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.156053 4731 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b6d7e2d5-3efb-47ec-9cb5-685467f22976-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.251591 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qzqp4"] Nov 25 15:06:33 crc kubenswrapper[4731]: E1125 15:06:33.252188 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d7e2d5-3efb-47ec-9cb5-685467f22976" containerName="extract" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.252234 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d7e2d5-3efb-47ec-9cb5-685467f22976" containerName="extract" Nov 25 15:06:33 crc kubenswrapper[4731]: E1125 15:06:33.252255 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d7e2d5-3efb-47ec-9cb5-685467f22976" containerName="util" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.252264 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d7e2d5-3efb-47ec-9cb5-685467f22976" containerName="util" Nov 25 15:06:33 crc kubenswrapper[4731]: E1125 15:06:33.252281 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6d7e2d5-3efb-47ec-9cb5-685467f22976" containerName="pull" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.252318 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6d7e2d5-3efb-47ec-9cb5-685467f22976" containerName="pull" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.252571 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6d7e2d5-3efb-47ec-9cb5-685467f22976" containerName="extract" Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.254086 4731 util.go:30] "No sandbox for pod can be found. 
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.254086 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.264663 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qzqp4"]
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.366188 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ac8f507-158c-4c89-9c07-f64a08a42283-catalog-content\") pod \"redhat-operators-qzqp4\" (UID: \"7ac8f507-158c-4c89-9c07-f64a08a42283\") " pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.366315 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ac8f507-158c-4c89-9c07-f64a08a42283-utilities\") pod \"redhat-operators-qzqp4\" (UID: \"7ac8f507-158c-4c89-9c07-f64a08a42283\") " pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.366370 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqfrt\" (UniqueName: \"kubernetes.io/projected/7ac8f507-158c-4c89-9c07-f64a08a42283-kube-api-access-dqfrt\") pod \"redhat-operators-qzqp4\" (UID: \"7ac8f507-158c-4c89-9c07-f64a08a42283\") " pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.467963 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ac8f507-158c-4c89-9c07-f64a08a42283-utilities\") pod \"redhat-operators-qzqp4\" (UID: \"7ac8f507-158c-4c89-9c07-f64a08a42283\") " pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.468052 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqfrt\" (UniqueName: \"kubernetes.io/projected/7ac8f507-158c-4c89-9c07-f64a08a42283-kube-api-access-dqfrt\") pod \"redhat-operators-qzqp4\" (UID: \"7ac8f507-158c-4c89-9c07-f64a08a42283\") " pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.468094 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ac8f507-158c-4c89-9c07-f64a08a42283-catalog-content\") pod \"redhat-operators-qzqp4\" (UID: \"7ac8f507-158c-4c89-9c07-f64a08a42283\") " pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.468529 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ac8f507-158c-4c89-9c07-f64a08a42283-utilities\") pod \"redhat-operators-qzqp4\" (UID: \"7ac8f507-158c-4c89-9c07-f64a08a42283\") " pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.468567 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ac8f507-158c-4c89-9c07-f64a08a42283-catalog-content\") pod \"redhat-operators-qzqp4\" (UID: \"7ac8f507-158c-4c89-9c07-f64a08a42283\") " pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.490838 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqfrt\" (UniqueName: \"kubernetes.io/projected/7ac8f507-158c-4c89-9c07-f64a08a42283-kube-api-access-dqfrt\") pod \"redhat-operators-qzqp4\" (UID: \"7ac8f507-158c-4c89-9c07-f64a08a42283\") " pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.585313 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qzqp4"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.638554 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh" event={"ID":"b6d7e2d5-3efb-47ec-9cb5-685467f22976","Type":"ContainerDied","Data":"900198a1473abb0f27d06b2e17824fcbfd74f690f1c1b5144f2b877414a4b1a6"}
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.639028 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="900198a1473abb0f27d06b2e17824fcbfd74f690f1c1b5144f2b877414a4b1a6"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.638613 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh"
Nov 25 15:06:33 crc kubenswrapper[4731]: I1125 15:06:33.809372 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qzqp4"]
Nov 25 15:06:33 crc kubenswrapper[4731]: W1125 15:06:33.815850 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ac8f507_158c_4c89_9c07_f64a08a42283.slice/crio-66fd88aa4254488a402804b3cc1853669c48589b803362f8d7012ecb2e7e2281 WatchSource:0}: Error finding container 66fd88aa4254488a402804b3cc1853669c48589b803362f8d7012ecb2e7e2281: Status 404 returned error can't find the container with id 66fd88aa4254488a402804b3cc1853669c48589b803362f8d7012ecb2e7e2281
Nov 25 15:06:34 crc kubenswrapper[4731]: I1125 15:06:34.538386 4731 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Nov 25 15:06:34 crc kubenswrapper[4731]: I1125 15:06:34.647832 4731 generic.go:334] "Generic (PLEG): container finished" podID="7ac8f507-158c-4c89-9c07-f64a08a42283" containerID="4e0572f942801595eb9910052de0ecff48a857f21c90fea73c4f00207a5a83fc" exitCode=0
Nov 25 15:06:34 crc kubenswrapper[4731]: I1125 15:06:34.647933 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzqp4" event={"ID":"7ac8f507-158c-4c89-9c07-f64a08a42283","Type":"ContainerDied","Data":"4e0572f942801595eb9910052de0ecff48a857f21c90fea73c4f00207a5a83fc"}
Nov 25 15:06:34 crc kubenswrapper[4731]: I1125 15:06:34.648254 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzqp4" event={"ID":"7ac8f507-158c-4c89-9c07-f64a08a42283","Type":"ContainerStarted","Data":"66fd88aa4254488a402804b3cc1853669c48589b803362f8d7012ecb2e7e2281"}
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.189963 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-zbjzj"]
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.190763 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-zbjzj"
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.192642 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.192763 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-89ggz"
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.193688 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.208707 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-zbjzj"]
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.309284 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6sl8\" (UniqueName: \"kubernetes.io/projected/5a45a5cc-bdc6-4daa-b2fd-509237906c01-kube-api-access-z6sl8\") pod \"nmstate-operator-557fdffb88-zbjzj\" (UID: \"5a45a5cc-bdc6-4daa-b2fd-509237906c01\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-zbjzj"
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.411043 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6sl8\" (UniqueName: \"kubernetes.io/projected/5a45a5cc-bdc6-4daa-b2fd-509237906c01-kube-api-access-z6sl8\") pod \"nmstate-operator-557fdffb88-zbjzj\" (UID: \"5a45a5cc-bdc6-4daa-b2fd-509237906c01\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-zbjzj"
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.439395 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6sl8\" (UniqueName: \"kubernetes.io/projected/5a45a5cc-bdc6-4daa-b2fd-509237906c01-kube-api-access-z6sl8\") pod \"nmstate-operator-557fdffb88-zbjzj\" (UID: \"5a45a5cc-bdc6-4daa-b2fd-509237906c01\") " pod="openshift-nmstate/nmstate-operator-557fdffb88-zbjzj"
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.507856 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-557fdffb88-zbjzj"
Nov 25 15:06:36 crc kubenswrapper[4731]: I1125 15:06:36.984265 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-557fdffb88-zbjzj"]
Nov 25 15:06:37 crc kubenswrapper[4731]: I1125 15:06:37.667342 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-zbjzj" event={"ID":"5a45a5cc-bdc6-4daa-b2fd-509237906c01","Type":"ContainerStarted","Data":"d553ce3b9b13ea651711254628bdb9c3a2c598a29b518bef9d1ff0bacbebeb4e"}
Nov 25 15:06:40 crc kubenswrapper[4731]: I1125 15:06:40.688980 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-557fdffb88-zbjzj" event={"ID":"5a45a5cc-bdc6-4daa-b2fd-509237906c01","Type":"ContainerStarted","Data":"3679288c13db3569d63901048241dc86c81f81c54523a2edee4e3b18defe69be"}
Nov 25 15:06:40 crc kubenswrapper[4731]: I1125 15:06:40.710902 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-557fdffb88-zbjzj" podStartSLOduration=1.8948848630000001 podStartE2EDuration="4.710884904s" podCreationTimestamp="2025-11-25 15:06:36 +0000 UTC" firstStartedPulling="2025-11-25 15:06:36.994721944 +0000 UTC m=+755.741856384" lastFinishedPulling="2025-11-25 15:06:39.810721995 +0000 UTC m=+758.557856425" observedRunningTime="2025-11-25 15:06:40.708234287 +0000 UTC m=+759.455368717" watchObservedRunningTime="2025-11-25 15:06:40.710884904 +0000 UTC m=+759.458019334"
Nov 25 15:06:45 crc kubenswrapper[4731]: I1125 15:06:45.721141 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzqp4" event={"ID":"7ac8f507-158c-4c89-9c07-f64a08a42283","Type":"ContainerStarted","Data":"48424744ae54cc45a8561a97b1409eb0bbacc67061d241ca0a2f13ebab508f00"}
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.727724 4731 generic.go:334] "Generic (PLEG): container finished" podID="7ac8f507-158c-4c89-9c07-f64a08a42283" containerID="48424744ae54cc45a8561a97b1409eb0bbacc67061d241ca0a2f13ebab508f00" exitCode=0
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.727778 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzqp4" event={"ID":"7ac8f507-158c-4c89-9c07-f64a08a42283","Type":"ContainerDied","Data":"48424744ae54cc45a8561a97b1409eb0bbacc67061d241ca0a2f13ebab508f00"}
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.774984 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc"]
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.775838 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.781497 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-g85sx"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.781661 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.782570 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd"]
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.783693 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.806695 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-58n5h"]
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.807529 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-58n5h"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.866615 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc"]
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.869760 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd"]
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.884501 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tbhb\" (UniqueName: \"kubernetes.io/projected/c271b7db-3cd3-4045-a954-497f15d9f89d-kube-api-access-6tbhb\") pod \"nmstate-webhook-6b89b748d8-rpjkc\" (UID: \"c271b7db-3cd3-4045-a954-497f15d9f89d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.884575 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb22f\" (UniqueName: \"kubernetes.io/projected/60a0d26a-381d-43f4-9cd1-e9172996b0e2-kube-api-access-fb22f\") pod \"nmstate-metrics-5dcf9c57c5-qjzpd\" (UID: \"60a0d26a-381d-43f4-9cd1-e9172996b0e2\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.884604 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c271b7db-3cd3-4045-a954-497f15d9f89d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-rpjkc\" (UID: \"c271b7db-3cd3-4045-a954-497f15d9f89d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.915616 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"]
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.916362 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.918554 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.918724 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nk2mj"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.918778 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.933588 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"]
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.986265 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-ovs-socket\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.986331 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-dbus-socket\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.986502 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tbhb\" (UniqueName: \"kubernetes.io/projected/c271b7db-3cd3-4045-a954-497f15d9f89d-kube-api-access-6tbhb\") pod \"nmstate-webhook-6b89b748d8-rpjkc\" (UID: \"c271b7db-3cd3-4045-a954-497f15d9f89d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.986568 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vswvb\" (UniqueName: \"kubernetes.io/projected/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-kube-api-access-vswvb\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.986673 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb22f\" (UniqueName: \"kubernetes.io/projected/60a0d26a-381d-43f4-9cd1-e9172996b0e2-kube-api-access-fb22f\") pod \"nmstate-metrics-5dcf9c57c5-qjzpd\" (UID: \"60a0d26a-381d-43f4-9cd1-e9172996b0e2\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd"
Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.986738 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-nmstate-lock\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h"
\"c271b7db-3cd3-4045-a954-497f15d9f89d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc" Nov 25 15:06:46 crc kubenswrapper[4731]: I1125 15:06:46.993427 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c271b7db-3cd3-4045-a954-497f15d9f89d-tls-key-pair\") pod \"nmstate-webhook-6b89b748d8-rpjkc\" (UID: \"c271b7db-3cd3-4045-a954-497f15d9f89d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.005184 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tbhb\" (UniqueName: \"kubernetes.io/projected/c271b7db-3cd3-4045-a954-497f15d9f89d-kube-api-access-6tbhb\") pod \"nmstate-webhook-6b89b748d8-rpjkc\" (UID: \"c271b7db-3cd3-4045-a954-497f15d9f89d\") " pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.005374 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb22f\" (UniqueName: \"kubernetes.io/projected/60a0d26a-381d-43f4-9cd1-e9172996b0e2-kube-api-access-fb22f\") pod \"nmstate-metrics-5dcf9c57c5-qjzpd\" (UID: \"60a0d26a-381d-43f4-9cd1-e9172996b0e2\") " pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.088221 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-p7zbr\" (UID: \"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.088273 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-269gr\" (UniqueName: \"kubernetes.io/projected/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-kube-api-access-269gr\") pod \"nmstate-console-plugin-5874bd7bc5-p7zbr\" (UID: \"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.088313 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vswvb\" (UniqueName: \"kubernetes.io/projected/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-kube-api-access-vswvb\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.088407 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-nmstate-lock\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.088449 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-ovs-socket\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.088475 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-p7zbr\" (UID: \"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.088499 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-dbus-socket\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.088515 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-nmstate-lock\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.088574 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-ovs-socket\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.088851 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-dbus-socket\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.100203 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.107515 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vswvb\" (UniqueName: \"kubernetes.io/projected/39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9-kube-api-access-vswvb\") pod \"nmstate-handler-58n5h\" (UID: \"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9\") " pod="openshift-nmstate/nmstate-handler-58n5h" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.115383 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-68954d78db-ntdxj"] Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.115958 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.116220 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.147156 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68954d78db-ntdxj"] Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.166520 4731 util.go:30] "No sandbox for pod can be found. 
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.166520 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-58n5h"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.189498 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-p7zbr\" (UID: \"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.189561 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-p7zbr\" (UID: \"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.189589 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-269gr\" (UniqueName: \"kubernetes.io/projected/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-kube-api-access-269gr\") pod \"nmstate-console-plugin-5874bd7bc5-p7zbr\" (UID: \"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"
Nov 25 15:06:47 crc kubenswrapper[4731]: E1125 15:06:47.189751 4731 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Nov 25 15:06:47 crc kubenswrapper[4731]: E1125 15:06:47.189826 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-plugin-serving-cert podName:6f5cb68e-43ff-4a3a-9917-b6b8349ddb03 nodeName:}" failed. No retries permitted until 2025-11-25 15:06:47.689793783 +0000 UTC m=+766.436928213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-plugin-serving-cert") pod "nmstate-console-plugin-5874bd7bc5-p7zbr" (UID: "6f5cb68e-43ff-4a3a-9917-b6b8349ddb03") : secret "plugin-serving-cert" not found
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.190884 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-nginx-conf\") pod \"nmstate-console-plugin-5874bd7bc5-p7zbr\" (UID: \"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.216855 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-269gr\" (UniqueName: \"kubernetes.io/projected/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-kube-api-access-269gr\") pod \"nmstate-console-plugin-5874bd7bc5-p7zbr\" (UID: \"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.291675 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-service-ca\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.292174 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f628a517-51b2-4f83-85b3-6c4c7c145a0f-console-oauth-config\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.292215 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-console-config\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.292283 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-oauth-serving-cert\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.292309 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp7xc\" (UniqueName: \"kubernetes.io/projected/f628a517-51b2-4f83-85b3-6c4c7c145a0f-kube-api-access-dp7xc\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj"
pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.292347 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f628a517-51b2-4f83-85b3-6c4c7c145a0f-console-serving-cert\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.393354 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-oauth-serving-cert\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.393421 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp7xc\" (UniqueName: \"kubernetes.io/projected/f628a517-51b2-4f83-85b3-6c4c7c145a0f-kube-api-access-dp7xc\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.393468 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-trusted-ca-bundle\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.393487 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f628a517-51b2-4f83-85b3-6c4c7c145a0f-console-serving-cert\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.393507 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-service-ca\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.393554 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f628a517-51b2-4f83-85b3-6c4c7c145a0f-console-oauth-config\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.393589 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-console-config\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.394195 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-oauth-serving-cert\") pod \"console-68954d78db-ntdxj\" (UID: 
\"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.394591 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-service-ca\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.395318 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-trusted-ca-bundle\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.397864 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f628a517-51b2-4f83-85b3-6c4c7c145a0f-console-oauth-config\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.398084 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f628a517-51b2-4f83-85b3-6c4c7c145a0f-console-serving-cert\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.398257 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f628a517-51b2-4f83-85b3-6c4c7c145a0f-console-config\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.409204 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp7xc\" (UniqueName: \"kubernetes.io/projected/f628a517-51b2-4f83-85b3-6c4c7c145a0f-kube-api-access-dp7xc\") pod \"console-68954d78db-ntdxj\" (UID: \"f628a517-51b2-4f83-85b3-6c4c7c145a0f\") " pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.484345 4731 util.go:30] "No sandbox for pod can be found. 
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.484345 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-68954d78db-ntdxj"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.571916 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd"]
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.583547 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc"]
Nov 25 15:06:47 crc kubenswrapper[4731]: W1125 15:06:47.597665 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60a0d26a_381d_43f4_9cd1_e9172996b0e2.slice/crio-53b6dc2958ecefc844339c2139498930307298ba694f0e3f2eb1fe1bf56b408d WatchSource:0}: Error finding container 53b6dc2958ecefc844339c2139498930307298ba694f0e3f2eb1fe1bf56b408d: Status 404 returned error can't find the container with id 53b6dc2958ecefc844339c2139498930307298ba694f0e3f2eb1fe1bf56b408d
Nov 25 15:06:47 crc kubenswrapper[4731]: W1125 15:06:47.600940 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc271b7db_3cd3_4045_a954_497f15d9f89d.slice/crio-4f819594a08a2ab383a36f76e7afb739fe1531c8466ec924e7e9b9ae32dd8178 WatchSource:0}: Error finding container 4f819594a08a2ab383a36f76e7afb739fe1531c8466ec924e7e9b9ae32dd8178: Status 404 returned error can't find the container with id 4f819594a08a2ab383a36f76e7afb739fe1531c8466ec924e7e9b9ae32dd8178
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.698471 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-p7zbr\" (UID: \"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.702889 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6f5cb68e-43ff-4a3a-9917-b6b8349ddb03-plugin-serving-cert\") pod \"nmstate-console-plugin-5874bd7bc5-p7zbr\" (UID: \"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03\") " pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.737111 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc" event={"ID":"c271b7db-3cd3-4045-a954-497f15d9f89d","Type":"ContainerStarted","Data":"4f819594a08a2ab383a36f76e7afb739fe1531c8466ec924e7e9b9ae32dd8178"}
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.738112 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd" event={"ID":"60a0d26a-381d-43f4-9cd1-e9172996b0e2","Type":"ContainerStarted","Data":"53b6dc2958ecefc844339c2139498930307298ba694f0e3f2eb1fe1bf56b408d"}
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.740057 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-58n5h" event={"ID":"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9","Type":"ContainerStarted","Data":"2c1fbef2144e3b8cfbfa936064f41af45b8f945ad48022b33dc94ae4cfbeddb8"}
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.742484 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qzqp4" event={"ID":"7ac8f507-158c-4c89-9c07-f64a08a42283","Type":"ContainerStarted","Data":"b30d4ef6e2ce4414cb7c19f345ddc54f39aaa14e8ce6da09dc2ef0912696c2eb"}
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.752777 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-68954d78db-ntdxj"]
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.763320 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qzqp4" podStartSLOduration=1.904254806 podStartE2EDuration="14.763299699s" podCreationTimestamp="2025-11-25 15:06:33 +0000 UTC" firstStartedPulling="2025-11-25 15:06:34.65021285 +0000 UTC m=+753.397347270" lastFinishedPulling="2025-11-25 15:06:47.509257733 +0000 UTC m=+766.256392163" observedRunningTime="2025-11-25 15:06:47.762059908 +0000 UTC m=+766.509194338" watchObservedRunningTime="2025-11-25 15:06:47.763299699 +0000 UTC m=+766.510434149"
Nov 25 15:06:47 crc kubenswrapper[4731]: W1125 15:06:47.774214 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf628a517_51b2_4f83_85b3_6c4c7c145a0f.slice/crio-0e463c59b21a2bb8e2009e9ea63bc563acc26b1d0633b387a8bda59bc2b08727 WatchSource:0}: Error finding container 0e463c59b21a2bb8e2009e9ea63bc563acc26b1d0633b387a8bda59bc2b08727: Status 404 returned error can't find the container with id 0e463c59b21a2bb8e2009e9ea63bc563acc26b1d0633b387a8bda59bc2b08727
Nov 25 15:06:47 crc kubenswrapper[4731]: I1125 15:06:47.839483 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"
Nov 25 15:06:48 crc kubenswrapper[4731]: I1125 15:06:48.256502 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr"]
Nov 25 15:06:48 crc kubenswrapper[4731]: W1125 15:06:48.264171 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f5cb68e_43ff_4a3a_9917_b6b8349ddb03.slice/crio-0d579ebe7714622b51064d93d63b2382c2accbf8a29abab68b6c06e5dbd84a21 WatchSource:0}: Error finding container 0d579ebe7714622b51064d93d63b2382c2accbf8a29abab68b6c06e5dbd84a21: Status 404 returned error can't find the container with id 0d579ebe7714622b51064d93d63b2382c2accbf8a29abab68b6c06e5dbd84a21
Nov 25 15:06:48 crc kubenswrapper[4731]: I1125 15:06:48.750298 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr" event={"ID":"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03","Type":"ContainerStarted","Data":"0d579ebe7714622b51064d93d63b2382c2accbf8a29abab68b6c06e5dbd84a21"}
Nov 25 15:06:48 crc kubenswrapper[4731]: I1125 15:06:48.751670 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68954d78db-ntdxj" event={"ID":"f628a517-51b2-4f83-85b3-6c4c7c145a0f","Type":"ContainerStarted","Data":"254aa266d56c45695c29abcbbd846f97ff2b0967a23724ae8913b83319b0f104"}
Nov 25 15:06:48 crc kubenswrapper[4731]: I1125 15:06:48.752282 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-68954d78db-ntdxj" event={"ID":"f628a517-51b2-4f83-85b3-6c4c7c145a0f","Type":"ContainerStarted","Data":"0e463c59b21a2bb8e2009e9ea63bc563acc26b1d0633b387a8bda59bc2b08727"}
pod="openshift-console/console-68954d78db-ntdxj" podStartSLOduration=1.774242322 podStartE2EDuration="1.774242322s" podCreationTimestamp="2025-11-25 15:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:06:48.769938312 +0000 UTC m=+767.517072762" watchObservedRunningTime="2025-11-25 15:06:48.774242322 +0000 UTC m=+767.521376752" Nov 25 15:06:50 crc kubenswrapper[4731]: I1125 15:06:50.040454 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:06:50 crc kubenswrapper[4731]: I1125 15:06:50.041344 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:06:53 crc kubenswrapper[4731]: I1125 15:06:53.586011 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qzqp4" Nov 25 15:06:53 crc kubenswrapper[4731]: I1125 15:06:53.586604 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qzqp4" Nov 25 15:06:53 crc kubenswrapper[4731]: I1125 15:06:53.635928 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qzqp4" Nov 25 15:06:53 crc kubenswrapper[4731]: I1125 15:06:53.823390 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qzqp4" Nov 25 15:06:53 crc kubenswrapper[4731]: I1125 15:06:53.972089 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qzqp4"] Nov 25 15:06:54 crc kubenswrapper[4731]: I1125 15:06:54.003593 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gq8m6"] Nov 25 15:06:54 crc kubenswrapper[4731]: I1125 15:06:54.004017 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gq8m6" podUID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerName="registry-server" containerID="cri-o://3657e5d0cf5363254eac7bc1fc9c141ea4aee505dfca71f128a1a159d3e0eefa" gracePeriod=2 Nov 25 15:06:54 crc kubenswrapper[4731]: I1125 15:06:54.793620 4731 generic.go:334] "Generic (PLEG): container finished" podID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerID="3657e5d0cf5363254eac7bc1fc9c141ea4aee505dfca71f128a1a159d3e0eefa" exitCode=0 Nov 25 15:06:54 crc kubenswrapper[4731]: I1125 15:06:54.793700 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq8m6" event={"ID":"3caecded-c24c-4f5f-8083-7dd157b4a52d","Type":"ContainerDied","Data":"3657e5d0cf5363254eac7bc1fc9c141ea4aee505dfca71f128a1a159d3e0eefa"} Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.156530 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.315385 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-utilities\") pod \"3caecded-c24c-4f5f-8083-7dd157b4a52d\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.315751 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-catalog-content\") pod \"3caecded-c24c-4f5f-8083-7dd157b4a52d\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.315781 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z5c6\" (UniqueName: \"kubernetes.io/projected/3caecded-c24c-4f5f-8083-7dd157b4a52d-kube-api-access-8z5c6\") pod \"3caecded-c24c-4f5f-8083-7dd157b4a52d\" (UID: \"3caecded-c24c-4f5f-8083-7dd157b4a52d\") " Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.316388 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-utilities" (OuterVolumeSpecName: "utilities") pod "3caecded-c24c-4f5f-8083-7dd157b4a52d" (UID: "3caecded-c24c-4f5f-8083-7dd157b4a52d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.330232 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3caecded-c24c-4f5f-8083-7dd157b4a52d-kube-api-access-8z5c6" (OuterVolumeSpecName: "kube-api-access-8z5c6") pod "3caecded-c24c-4f5f-8083-7dd157b4a52d" (UID: "3caecded-c24c-4f5f-8083-7dd157b4a52d"). InnerVolumeSpecName "kube-api-access-8z5c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.411279 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3caecded-c24c-4f5f-8083-7dd157b4a52d" (UID: "3caecded-c24c-4f5f-8083-7dd157b4a52d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.417852 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.417913 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z5c6\" (UniqueName: \"kubernetes.io/projected/3caecded-c24c-4f5f-8083-7dd157b4a52d-kube-api-access-8z5c6\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.417933 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3caecded-c24c-4f5f-8083-7dd157b4a52d-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.804364 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gq8m6" event={"ID":"3caecded-c24c-4f5f-8083-7dd157b4a52d","Type":"ContainerDied","Data":"bb2ae99bdeb629e7b57071e392407f5d8e3162ab7ce292b300ecb4cf26f8b486"} Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.804919 4731 scope.go:117] "RemoveContainer" containerID="3657e5d0cf5363254eac7bc1fc9c141ea4aee505dfca71f128a1a159d3e0eefa" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.805125 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gq8m6" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.819890 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd" event={"ID":"60a0d26a-381d-43f4-9cd1-e9172996b0e2","Type":"ContainerStarted","Data":"60bd19ce3b19db1273cc7d8bd11836d983ef98bd5fb0577350fff3914beb0674"} Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.828234 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-58n5h" event={"ID":"39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9","Type":"ContainerStarted","Data":"3aaff4349413d1eaa0112769aa8f41dd94d719d02e3d31bdbef4d77720801306"} Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.828345 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-58n5h" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.835183 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc" event={"ID":"c271b7db-3cd3-4045-a954-497f15d9f89d","Type":"ContainerStarted","Data":"dd1e27f558a58f9b9580e57c6ddbfd8e4a04002a033c349d8cb2816f35b1deab"} Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.835374 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.856884 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gq8m6"] Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.866440 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gq8m6"] Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.869591 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc" podStartSLOduration=2.554783659 podStartE2EDuration="9.869562263s" podCreationTimestamp="2025-11-25 15:06:46 +0000 UTC" 
firstStartedPulling="2025-11-25 15:06:47.602927358 +0000 UTC m=+766.350061788" lastFinishedPulling="2025-11-25 15:06:54.917705962 +0000 UTC m=+773.664840392" observedRunningTime="2025-11-25 15:06:55.862468711 +0000 UTC m=+774.609603141" watchObservedRunningTime="2025-11-25 15:06:55.869562263 +0000 UTC m=+774.616696683" Nov 25 15:06:55 crc kubenswrapper[4731]: I1125 15:06:55.900577 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-58n5h" podStartSLOduration=2.198465588 podStartE2EDuration="9.900558316s" podCreationTimestamp="2025-11-25 15:06:46 +0000 UTC" firstStartedPulling="2025-11-25 15:06:47.200064546 +0000 UTC m=+765.947198976" lastFinishedPulling="2025-11-25 15:06:54.902157274 +0000 UTC m=+773.649291704" observedRunningTime="2025-11-25 15:06:55.897286942 +0000 UTC m=+774.644421392" watchObservedRunningTime="2025-11-25 15:06:55.900558316 +0000 UTC m=+774.647692746" Nov 25 15:06:56 crc kubenswrapper[4731]: I1125 15:06:56.760044 4731 scope.go:117] "RemoveContainer" containerID="51df46204887a815c75a4aa7b8999e07cca0471655a8af046f7bdf2c1d8e3bd7" Nov 25 15:06:56 crc kubenswrapper[4731]: I1125 15:06:56.778789 4731 scope.go:117] "RemoveContainer" containerID="36d373a0aaa1a06213d3a37bb3b6f6379b3ed76f8630e3a2621e541982418dbd" Nov 25 15:06:56 crc kubenswrapper[4731]: I1125 15:06:56.830229 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3caecded-c24c-4f5f-8083-7dd157b4a52d" path="/var/lib/kubelet/pods/3caecded-c24c-4f5f-8083-7dd157b4a52d/volumes" Nov 25 15:06:57 crc kubenswrapper[4731]: I1125 15:06:57.485501 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:57 crc kubenswrapper[4731]: I1125 15:06:57.485854 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:57 crc kubenswrapper[4731]: I1125 15:06:57.492123 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:57 crc kubenswrapper[4731]: I1125 15:06:57.862104 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr" event={"ID":"6f5cb68e-43ff-4a3a-9917-b6b8349ddb03","Type":"ContainerStarted","Data":"c70e46000b9a0bc2e71aec0df631838e95d5294b0f000bcb70c997b04ba16a27"} Nov 25 15:06:57 crc kubenswrapper[4731]: I1125 15:06:57.915864 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-68954d78db-ntdxj" Nov 25 15:06:57 crc kubenswrapper[4731]: I1125 15:06:57.955299 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5874bd7bc5-p7zbr" podStartSLOduration=3.357608015 podStartE2EDuration="11.955275324s" podCreationTimestamp="2025-11-25 15:06:46 +0000 UTC" firstStartedPulling="2025-11-25 15:06:48.266130038 +0000 UTC m=+767.013264468" lastFinishedPulling="2025-11-25 15:06:56.863797337 +0000 UTC m=+775.610931777" observedRunningTime="2025-11-25 15:06:57.9282482 +0000 UTC m=+776.675382640" watchObservedRunningTime="2025-11-25 15:06:57.955275324 +0000 UTC m=+776.702409754" Nov 25 15:06:57 crc kubenswrapper[4731]: I1125 15:06:57.994807 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dhtk5"] Nov 25 15:06:58 crc kubenswrapper[4731]: I1125 15:06:58.872799 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd" event={"ID":"60a0d26a-381d-43f4-9cd1-e9172996b0e2","Type":"ContainerStarted","Data":"5148ea1147e012a316f56c495bf1adc4b8624184fcbbddaef1dc2910c8b2d117"} Nov 25 15:06:58 crc kubenswrapper[4731]: I1125 15:06:58.891556 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-5dcf9c57c5-qjzpd" podStartSLOduration=2.005193389 podStartE2EDuration="12.891534701s" podCreationTimestamp="2025-11-25 15:06:46 +0000 UTC" firstStartedPulling="2025-11-25 15:06:47.602322013 +0000 UTC m=+766.349456443" lastFinishedPulling="2025-11-25 15:06:58.488663335 +0000 UTC m=+777.235797755" observedRunningTime="2025-11-25 15:06:58.888528645 +0000 UTC m=+777.635663075" watchObservedRunningTime="2025-11-25 15:06:58.891534701 +0000 UTC m=+777.638669141" Nov 25 15:07:02 crc kubenswrapper[4731]: I1125 15:07:02.208962 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-58n5h" Nov 25 15:07:07 crc kubenswrapper[4731]: I1125 15:07:07.112429 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6b89b748d8-rpjkc" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.551770 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l"] Nov 25 15:07:19 crc kubenswrapper[4731]: E1125 15:07:19.553025 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerName="extract-content" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.553049 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerName="extract-content" Nov 25 15:07:19 crc kubenswrapper[4731]: E1125 15:07:19.553081 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerName="registry-server" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.553095 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerName="registry-server" Nov 25 15:07:19 crc kubenswrapper[4731]: E1125 15:07:19.553109 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerName="extract-utilities" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.553122 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerName="extract-utilities" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.553298 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3caecded-c24c-4f5f-8083-7dd157b4a52d" containerName="registry-server" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.554992 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.559724 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.567556 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l"] Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.702903 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.703243 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwwtm\" (UniqueName: \"kubernetes.io/projected/97357c05-65a9-4732-b01e-651eb43fe514-kube-api-access-cwwtm\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.703361 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.804913 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.805098 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwwtm\" (UniqueName: \"kubernetes.io/projected/97357c05-65a9-4732-b01e-651eb43fe514-kube-api-access-cwwtm\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.805151 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.806171 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-util\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.806171 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-bundle\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.827086 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwwtm\" (UniqueName: \"kubernetes.io/projected/97357c05-65a9-4732-b01e-651eb43fe514-kube-api-access-cwwtm\") pod \"e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:19 crc kubenswrapper[4731]: I1125 15:07:19.876840 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:20 crc kubenswrapper[4731]: I1125 15:07:20.039993 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:07:20 crc kubenswrapper[4731]: I1125 15:07:20.040502 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:07:20 crc kubenswrapper[4731]: I1125 15:07:20.301707 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l"] Nov 25 15:07:21 crc kubenswrapper[4731]: I1125 15:07:21.039294 4731 generic.go:334] "Generic (PLEG): container finished" podID="97357c05-65a9-4732-b01e-651eb43fe514" containerID="14089c94a993b2223e64f74e750ee59bfe3254611d95c75c38e396d3d1d1133a" exitCode=0 Nov 25 15:07:21 crc kubenswrapper[4731]: I1125 15:07:21.039352 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" event={"ID":"97357c05-65a9-4732-b01e-651eb43fe514","Type":"ContainerDied","Data":"14089c94a993b2223e64f74e750ee59bfe3254611d95c75c38e396d3d1d1133a"} Nov 25 15:07:21 crc kubenswrapper[4731]: I1125 15:07:21.039386 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" event={"ID":"97357c05-65a9-4732-b01e-651eb43fe514","Type":"ContainerStarted","Data":"f93dfc2262223b0586f1872a74b52c12a440f679a694d64b823aed49a2a127b5"} Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.051512 4731 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-console/console-f9d7485db-dhtk5" podUID="5a457d06-8bf4-4077-a3e9-d25ff971832b" containerName="console" containerID="cri-o://e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26" gracePeriod=15 Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.433206 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dhtk5_5a457d06-8bf4-4077-a3e9-d25ff971832b/console/0.log" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.433951 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.500131 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-oauth-serving-cert\") pod \"5a457d06-8bf4-4077-a3e9-d25ff971832b\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.500271 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-service-ca\") pod \"5a457d06-8bf4-4077-a3e9-d25ff971832b\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.500301 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-serving-cert\") pod \"5a457d06-8bf4-4077-a3e9-d25ff971832b\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.500332 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-config\") pod \"5a457d06-8bf4-4077-a3e9-d25ff971832b\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.501268 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-service-ca" (OuterVolumeSpecName: "service-ca") pod "5a457d06-8bf4-4077-a3e9-d25ff971832b" (UID: "5a457d06-8bf4-4077-a3e9-d25ff971832b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.501594 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-config" (OuterVolumeSpecName: "console-config") pod "5a457d06-8bf4-4077-a3e9-d25ff971832b" (UID: "5a457d06-8bf4-4077-a3e9-d25ff971832b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.501636 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "5a457d06-8bf4-4077-a3e9-d25ff971832b" (UID: "5a457d06-8bf4-4077-a3e9-d25ff971832b"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.506719 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "5a457d06-8bf4-4077-a3e9-d25ff971832b" (UID: "5a457d06-8bf4-4077-a3e9-d25ff971832b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.601582 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-oauth-config\") pod \"5a457d06-8bf4-4077-a3e9-d25ff971832b\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.601650 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfplq\" (UniqueName: \"kubernetes.io/projected/5a457d06-8bf4-4077-a3e9-d25ff971832b-kube-api-access-jfplq\") pod \"5a457d06-8bf4-4077-a3e9-d25ff971832b\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.601693 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-trusted-ca-bundle\") pod \"5a457d06-8bf4-4077-a3e9-d25ff971832b\" (UID: \"5a457d06-8bf4-4077-a3e9-d25ff971832b\") " Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.601950 4731 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.601968 4731 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-service-ca\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.601984 4731 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.601996 4731 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.602573 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "5a457d06-8bf4-4077-a3e9-d25ff971832b" (UID: "5a457d06-8bf4-4077-a3e9-d25ff971832b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.604572 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "5a457d06-8bf4-4077-a3e9-d25ff971832b" (UID: "5a457d06-8bf4-4077-a3e9-d25ff971832b"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.604999 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a457d06-8bf4-4077-a3e9-d25ff971832b-kube-api-access-jfplq" (OuterVolumeSpecName: "kube-api-access-jfplq") pod "5a457d06-8bf4-4077-a3e9-d25ff971832b" (UID: "5a457d06-8bf4-4077-a3e9-d25ff971832b"). InnerVolumeSpecName "kube-api-access-jfplq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.703308 4731 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5a457d06-8bf4-4077-a3e9-d25ff971832b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.703365 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfplq\" (UniqueName: \"kubernetes.io/projected/5a457d06-8bf4-4077-a3e9-d25ff971832b-kube-api-access-jfplq\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:23 crc kubenswrapper[4731]: I1125 15:07:23.703382 4731 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a457d06-8bf4-4077-a3e9-d25ff971832b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.063678 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dhtk5_5a457d06-8bf4-4077-a3e9-d25ff971832b/console/0.log" Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.065380 4731 generic.go:334] "Generic (PLEG): container finished" podID="5a457d06-8bf4-4077-a3e9-d25ff971832b" containerID="e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26" exitCode=2 Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.065544 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-dhtk5" Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.065522 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dhtk5" event={"ID":"5a457d06-8bf4-4077-a3e9-d25ff971832b","Type":"ContainerDied","Data":"e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26"} Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.065725 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dhtk5" event={"ID":"5a457d06-8bf4-4077-a3e9-d25ff971832b","Type":"ContainerDied","Data":"0cbe87e914e570842cf4755d02b11f7a770ad6fe1f4973eb0adf005821b3f2be"} Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.065756 4731 scope.go:117] "RemoveContainer" containerID="e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26" Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.070781 4731 generic.go:334] "Generic (PLEG): container finished" podID="97357c05-65a9-4732-b01e-651eb43fe514" containerID="3eae8ed9e78d725eae5bc41a3290291d950393a820ac138098b45cb8d8f96c89" exitCode=0 Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.070856 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" event={"ID":"97357c05-65a9-4732-b01e-651eb43fe514","Type":"ContainerDied","Data":"3eae8ed9e78d725eae5bc41a3290291d950393a820ac138098b45cb8d8f96c89"} Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.128517 4731 scope.go:117] "RemoveContainer" containerID="e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26" Nov 25 15:07:24 crc kubenswrapper[4731]: E1125 15:07:24.129762 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26\": container with ID starting with e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26 not found: ID does not exist" containerID="e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26" Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.129838 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26"} err="failed to get container status \"e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26\": rpc error: code = NotFound desc = could not find container \"e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26\": container with ID starting with e469a26dec6f6c2a823b1b774c101979b33e112778d5c6039b572a9f6aaf9a26 not found: ID does not exist" Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.137023 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dhtk5"] Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.144498 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-dhtk5"] Nov 25 15:07:24 crc kubenswrapper[4731]: I1125 15:07:24.823538 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a457d06-8bf4-4077-a3e9-d25ff971832b" path="/var/lib/kubelet/pods/5a457d06-8bf4-4077-a3e9-d25ff971832b/volumes" Nov 25 15:07:25 crc kubenswrapper[4731]: I1125 15:07:25.079935 4731 generic.go:334] "Generic (PLEG): container finished" podID="97357c05-65a9-4732-b01e-651eb43fe514" 
containerID="3ade5853befb5b3dfaa9b8f553d1d4ed52a9ed8057342980cdfd6ac4b47c3778" exitCode=0 Nov 25 15:07:25 crc kubenswrapper[4731]: I1125 15:07:25.079986 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" event={"ID":"97357c05-65a9-4732-b01e-651eb43fe514","Type":"ContainerDied","Data":"3ade5853befb5b3dfaa9b8f553d1d4ed52a9ed8057342980cdfd6ac4b47c3778"} Nov 25 15:07:26 crc kubenswrapper[4731]: I1125 15:07:26.404151 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:26 crc kubenswrapper[4731]: I1125 15:07:26.555593 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwwtm\" (UniqueName: \"kubernetes.io/projected/97357c05-65a9-4732-b01e-651eb43fe514-kube-api-access-cwwtm\") pod \"97357c05-65a9-4732-b01e-651eb43fe514\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " Nov 25 15:07:26 crc kubenswrapper[4731]: I1125 15:07:26.555654 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-bundle\") pod \"97357c05-65a9-4732-b01e-651eb43fe514\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " Nov 25 15:07:26 crc kubenswrapper[4731]: I1125 15:07:26.555770 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-util\") pod \"97357c05-65a9-4732-b01e-651eb43fe514\" (UID: \"97357c05-65a9-4732-b01e-651eb43fe514\") " Nov 25 15:07:26 crc kubenswrapper[4731]: I1125 15:07:26.557002 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-bundle" (OuterVolumeSpecName: "bundle") pod "97357c05-65a9-4732-b01e-651eb43fe514" (UID: "97357c05-65a9-4732-b01e-651eb43fe514"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:07:26 crc kubenswrapper[4731]: I1125 15:07:26.564303 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97357c05-65a9-4732-b01e-651eb43fe514-kube-api-access-cwwtm" (OuterVolumeSpecName: "kube-api-access-cwwtm") pod "97357c05-65a9-4732-b01e-651eb43fe514" (UID: "97357c05-65a9-4732-b01e-651eb43fe514"). InnerVolumeSpecName "kube-api-access-cwwtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:07:26 crc kubenswrapper[4731]: I1125 15:07:26.565908 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-util" (OuterVolumeSpecName: "util") pod "97357c05-65a9-4732-b01e-651eb43fe514" (UID: "97357c05-65a9-4732-b01e-651eb43fe514"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:07:26 crc kubenswrapper[4731]: I1125 15:07:26.657674 4731 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-util\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:26 crc kubenswrapper[4731]: I1125 15:07:26.657717 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwwtm\" (UniqueName: \"kubernetes.io/projected/97357c05-65a9-4732-b01e-651eb43fe514-kube-api-access-cwwtm\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:26 crc kubenswrapper[4731]: I1125 15:07:26.657732 4731 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/97357c05-65a9-4732-b01e-651eb43fe514-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:07:27 crc kubenswrapper[4731]: I1125 15:07:27.099268 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" event={"ID":"97357c05-65a9-4732-b01e-651eb43fe514","Type":"ContainerDied","Data":"f93dfc2262223b0586f1872a74b52c12a440f679a694d64b823aed49a2a127b5"} Nov 25 15:07:27 crc kubenswrapper[4731]: I1125 15:07:27.099335 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f93dfc2262223b0586f1872a74b52c12a440f679a694d64b823aed49a2a127b5" Nov 25 15:07:27 crc kubenswrapper[4731]: I1125 15:07:27.099405 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.462871 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm"] Nov 25 15:07:37 crc kubenswrapper[4731]: E1125 15:07:37.463969 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97357c05-65a9-4732-b01e-651eb43fe514" containerName="extract" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.463986 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="97357c05-65a9-4732-b01e-651eb43fe514" containerName="extract" Nov 25 15:07:37 crc kubenswrapper[4731]: E1125 15:07:37.464008 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97357c05-65a9-4732-b01e-651eb43fe514" containerName="util" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.464015 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="97357c05-65a9-4732-b01e-651eb43fe514" containerName="util" Nov 25 15:07:37 crc kubenswrapper[4731]: E1125 15:07:37.464030 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a457d06-8bf4-4077-a3e9-d25ff971832b" containerName="console" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.464037 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a457d06-8bf4-4077-a3e9-d25ff971832b" containerName="console" Nov 25 15:07:37 crc kubenswrapper[4731]: E1125 15:07:37.464050 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97357c05-65a9-4732-b01e-651eb43fe514" containerName="pull" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.464058 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="97357c05-65a9-4732-b01e-651eb43fe514" containerName="pull" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.464208 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="97357c05-65a9-4732-b01e-651eb43fe514" containerName="extract" Nov 25 
15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.464226 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a457d06-8bf4-4077-a3e9-d25ff971832b" containerName="console" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.464772 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.469114 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.469175 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.469231 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4j7gr" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.469727 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.472477 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.491367 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm"] Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.653369 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wws5f\" (UniqueName: \"kubernetes.io/projected/b6957d61-fdfc-49b1-8721-a14991fc51c5-kube-api-access-wws5f\") pod \"metallb-operator-controller-manager-6bc94d877-jvccm\" (UID: \"b6957d61-fdfc-49b1-8721-a14991fc51c5\") " pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.653444 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6957d61-fdfc-49b1-8721-a14991fc51c5-apiservice-cert\") pod \"metallb-operator-controller-manager-6bc94d877-jvccm\" (UID: \"b6957d61-fdfc-49b1-8721-a14991fc51c5\") " pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.653515 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6957d61-fdfc-49b1-8721-a14991fc51c5-webhook-cert\") pod \"metallb-operator-controller-manager-6bc94d877-jvccm\" (UID: \"b6957d61-fdfc-49b1-8721-a14991fc51c5\") " pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.754635 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wws5f\" (UniqueName: \"kubernetes.io/projected/b6957d61-fdfc-49b1-8721-a14991fc51c5-kube-api-access-wws5f\") pod \"metallb-operator-controller-manager-6bc94d877-jvccm\" (UID: \"b6957d61-fdfc-49b1-8721-a14991fc51c5\") " pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.754696 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/b6957d61-fdfc-49b1-8721-a14991fc51c5-apiservice-cert\") pod \"metallb-operator-controller-manager-6bc94d877-jvccm\" (UID: \"b6957d61-fdfc-49b1-8721-a14991fc51c5\") " pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.754769 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6957d61-fdfc-49b1-8721-a14991fc51c5-webhook-cert\") pod \"metallb-operator-controller-manager-6bc94d877-jvccm\" (UID: \"b6957d61-fdfc-49b1-8721-a14991fc51c5\") " pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.756333 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4"] Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.757137 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.763080 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b6957d61-fdfc-49b1-8721-a14991fc51c5-webhook-cert\") pod \"metallb-operator-controller-manager-6bc94d877-jvccm\" (UID: \"b6957d61-fdfc-49b1-8721-a14991fc51c5\") " pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.765892 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-mtzms" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.766470 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.766989 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.770432 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b6957d61-fdfc-49b1-8721-a14991fc51c5-apiservice-cert\") pod \"metallb-operator-controller-manager-6bc94d877-jvccm\" (UID: \"b6957d61-fdfc-49b1-8721-a14991fc51c5\") " pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.792698 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wws5f\" (UniqueName: \"kubernetes.io/projected/b6957d61-fdfc-49b1-8721-a14991fc51c5-kube-api-access-wws5f\") pod \"metallb-operator-controller-manager-6bc94d877-jvccm\" (UID: \"b6957d61-fdfc-49b1-8721-a14991fc51c5\") " pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.794869 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4"] Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.795415 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.856186 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdfng\" (UniqueName: \"kubernetes.io/projected/af60cb8b-34de-4af2-a647-84ed628f77f0-kube-api-access-rdfng\") pod \"metallb-operator-webhook-server-796ff6997-qwkk4\" (UID: \"af60cb8b-34de-4af2-a647-84ed628f77f0\") " pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.856696 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af60cb8b-34de-4af2-a647-84ed628f77f0-webhook-cert\") pod \"metallb-operator-webhook-server-796ff6997-qwkk4\" (UID: \"af60cb8b-34de-4af2-a647-84ed628f77f0\") " pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.856764 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af60cb8b-34de-4af2-a647-84ed628f77f0-apiservice-cert\") pod \"metallb-operator-webhook-server-796ff6997-qwkk4\" (UID: \"af60cb8b-34de-4af2-a647-84ed628f77f0\") " pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.957748 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdfng\" (UniqueName: \"kubernetes.io/projected/af60cb8b-34de-4af2-a647-84ed628f77f0-kube-api-access-rdfng\") pod \"metallb-operator-webhook-server-796ff6997-qwkk4\" (UID: \"af60cb8b-34de-4af2-a647-84ed628f77f0\") " pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.957835 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af60cb8b-34de-4af2-a647-84ed628f77f0-webhook-cert\") pod \"metallb-operator-webhook-server-796ff6997-qwkk4\" (UID: \"af60cb8b-34de-4af2-a647-84ed628f77f0\") " pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.957916 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af60cb8b-34de-4af2-a647-84ed628f77f0-apiservice-cert\") pod \"metallb-operator-webhook-server-796ff6997-qwkk4\" (UID: \"af60cb8b-34de-4af2-a647-84ed628f77f0\") " pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.963765 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/af60cb8b-34de-4af2-a647-84ed628f77f0-webhook-cert\") pod \"metallb-operator-webhook-server-796ff6997-qwkk4\" (UID: \"af60cb8b-34de-4af2-a647-84ed628f77f0\") " pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.966540 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/af60cb8b-34de-4af2-a647-84ed628f77f0-apiservice-cert\") pod \"metallb-operator-webhook-server-796ff6997-qwkk4\" (UID: \"af60cb8b-34de-4af2-a647-84ed628f77f0\") " 
pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:37 crc kubenswrapper[4731]: I1125 15:07:37.978226 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdfng\" (UniqueName: \"kubernetes.io/projected/af60cb8b-34de-4af2-a647-84ed628f77f0-kube-api-access-rdfng\") pod \"metallb-operator-webhook-server-796ff6997-qwkk4\" (UID: \"af60cb8b-34de-4af2-a647-84ed628f77f0\") " pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:38 crc kubenswrapper[4731]: I1125 15:07:38.094678 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm"] Nov 25 15:07:38 crc kubenswrapper[4731]: I1125 15:07:38.165858 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:38 crc kubenswrapper[4731]: I1125 15:07:38.186090 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" event={"ID":"b6957d61-fdfc-49b1-8721-a14991fc51c5","Type":"ContainerStarted","Data":"5d3cc565f74f93c91d2e54d1f2a1b67e32256c9936e49ab93ddcfb4cfc2e2a66"} Nov 25 15:07:38 crc kubenswrapper[4731]: I1125 15:07:38.410223 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4"] Nov 25 15:07:39 crc kubenswrapper[4731]: I1125 15:07:39.198419 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" event={"ID":"af60cb8b-34de-4af2-a647-84ed628f77f0","Type":"ContainerStarted","Data":"65c252a0611213c8631fb39161ecf1b1068146bfe577fcab0fb89ccda8ee9416"} Nov 25 15:07:42 crc kubenswrapper[4731]: I1125 15:07:42.220750 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" event={"ID":"b6957d61-fdfc-49b1-8721-a14991fc51c5","Type":"ContainerStarted","Data":"e25725626462fd8918af918e1b998c64780e59f3dba86d387cf72a3079b94bb7"} Nov 25 15:07:42 crc kubenswrapper[4731]: I1125 15:07:42.221363 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:07:42 crc kubenswrapper[4731]: I1125 15:07:42.248560 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" podStartSLOduration=2.039992201 podStartE2EDuration="5.248530806s" podCreationTimestamp="2025-11-25 15:07:37 +0000 UTC" firstStartedPulling="2025-11-25 15:07:38.109291285 +0000 UTC m=+816.856425715" lastFinishedPulling="2025-11-25 15:07:41.31782989 +0000 UTC m=+820.064964320" observedRunningTime="2025-11-25 15:07:42.248251049 +0000 UTC m=+820.995385509" watchObservedRunningTime="2025-11-25 15:07:42.248530806 +0000 UTC m=+820.995665266" Nov 25 15:07:45 crc kubenswrapper[4731]: I1125 15:07:45.246280 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" event={"ID":"af60cb8b-34de-4af2-a647-84ed628f77f0","Type":"ContainerStarted","Data":"9f199422a57de41fb9c9f002513788b8b51cca403065bed5810df88ba712e440"} Nov 25 15:07:45 crc kubenswrapper[4731]: I1125 15:07:45.247268 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:07:45 crc kubenswrapper[4731]: 
I1125 15:07:45.284014 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" podStartSLOduration=2.192140486 podStartE2EDuration="8.283992197s" podCreationTimestamp="2025-11-25 15:07:37 +0000 UTC" firstStartedPulling="2025-11-25 15:07:38.421685419 +0000 UTC m=+817.168819839" lastFinishedPulling="2025-11-25 15:07:44.51353712 +0000 UTC m=+823.260671550" observedRunningTime="2025-11-25 15:07:45.281810721 +0000 UTC m=+824.028945171" watchObservedRunningTime="2025-11-25 15:07:45.283992197 +0000 UTC m=+824.031126627" Nov 25 15:07:50 crc kubenswrapper[4731]: I1125 15:07:50.040138 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:07:50 crc kubenswrapper[4731]: I1125 15:07:50.041653 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:07:50 crc kubenswrapper[4731]: I1125 15:07:50.041837 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 15:07:50 crc kubenswrapper[4731]: I1125 15:07:50.042620 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0455c99facc14bf1fc68ee4a3d36481680ea8d93970fec20b06e0683abb73b7e"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:07:50 crc kubenswrapper[4731]: I1125 15:07:50.042793 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://0455c99facc14bf1fc68ee4a3d36481680ea8d93970fec20b06e0683abb73b7e" gracePeriod=600 Nov 25 15:07:50 crc kubenswrapper[4731]: I1125 15:07:50.286060 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="0455c99facc14bf1fc68ee4a3d36481680ea8d93970fec20b06e0683abb73b7e" exitCode=0 Nov 25 15:07:50 crc kubenswrapper[4731]: I1125 15:07:50.286173 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"0455c99facc14bf1fc68ee4a3d36481680ea8d93970fec20b06e0683abb73b7e"} Nov 25 15:07:50 crc kubenswrapper[4731]: I1125 15:07:50.286475 4731 scope.go:117] "RemoveContainer" containerID="243b2f2b733057d38d0d110d07ed2b0ae2ae87e1f75ff6ea45c0133b6a69171d" Nov 25 15:07:51 crc kubenswrapper[4731]: I1125 15:07:51.295514 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"0c9a20a237355e5d71b9ca12829a2374c42e2593ae50b43c8d42515f2f362b09"} Nov 25 15:07:58 crc kubenswrapper[4731]: I1125 
15:07:58.170568 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-796ff6997-qwkk4" Nov 25 15:08:17 crc kubenswrapper[4731]: I1125 15:08:17.799359 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6bc94d877-jvccm" Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.605311 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-2r6xw"] Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.609276 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2r6xw" Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.611279 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"] Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.612324 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt" Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.616463 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.616466 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.616542 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.618921 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-jc4qb" Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.622552 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"] Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.690251 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-frr-sockets\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw" Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.690343 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d337b1a-530a-4c6d-8eec-acd385c4dd65-metrics-certs\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw" Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.690396 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg6j4\" (UniqueName: \"kubernetes.io/projected/7d337b1a-530a-4c6d-8eec-acd385c4dd65-kube-api-access-pg6j4\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw" Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.690427 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5bwn\" (UniqueName: \"kubernetes.io/projected/fe90ff60-e166-4e73-a995-dfd82a45895c-kube-api-access-q5bwn\") pod \"frr-k8s-webhook-server-6998585d5-qxfjt\" (UID: \"fe90ff60-e166-4e73-a995-dfd82a45895c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt" Nov 25 15:08:18 
crc kubenswrapper[4731]: I1125 15:08:18.690464 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-reloader\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.690505 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-frr-conf\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.690636 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe90ff60-e166-4e73-a995-dfd82a45895c-cert\") pod \"frr-k8s-webhook-server-6998585d5-qxfjt\" (UID: \"fe90ff60-e166-4e73-a995-dfd82a45895c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.690687 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-metrics\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.690713 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7d337b1a-530a-4c6d-8eec-acd385c4dd65-frr-startup\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.695339 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-2jnfr"]
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.696693 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.701715 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.702020 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.702213 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.702553 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-5565w"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.724744 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6c7b4b5f48-5kfg7"]
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.725899 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.728166 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.758496 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-5kfg7"]
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793131 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-metrics-certs\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793203 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-frr-sockets\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793250 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5df2adbb-3909-4e10-8f11-5953fb961766-metallb-excludel2\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793277 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d337b1a-530a-4c6d-8eec-acd385c4dd65-metrics-certs\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793304 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-memberlist\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793334 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5bwn\" (UniqueName: \"kubernetes.io/projected/fe90ff60-e166-4e73-a995-dfd82a45895c-kube-api-access-q5bwn\") pod \"frr-k8s-webhook-server-6998585d5-qxfjt\" (UID: \"fe90ff60-e166-4e73-a995-dfd82a45895c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793360 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg6j4\" (UniqueName: \"kubernetes.io/projected/7d337b1a-530a-4c6d-8eec-acd385c4dd65-kube-api-access-pg6j4\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793380 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmqc2\" (UniqueName: \"kubernetes.io/projected/5df2adbb-3909-4e10-8f11-5953fb961766-kube-api-access-dmqc2\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793404 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-reloader\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793434 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-frr-conf\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793486 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe90ff60-e166-4e73-a995-dfd82a45895c-cert\") pod \"frr-k8s-webhook-server-6998585d5-qxfjt\" (UID: \"fe90ff60-e166-4e73-a995-dfd82a45895c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793517 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-metrics\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.793542 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7d337b1a-530a-4c6d-8eec-acd385c4dd65-frr-startup\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.794987 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7d337b1a-530a-4c6d-8eec-acd385c4dd65-frr-startup\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.795319 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-frr-sockets\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.796707 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-frr-conf\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.796960 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-reloader\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.797513 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7d337b1a-530a-4c6d-8eec-acd385c4dd65-metrics\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.847094 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fe90ff60-e166-4e73-a995-dfd82a45895c-cert\") pod \"frr-k8s-webhook-server-6998585d5-qxfjt\" (UID: \"fe90ff60-e166-4e73-a995-dfd82a45895c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.868740 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d337b1a-530a-4c6d-8eec-acd385c4dd65-metrics-certs\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.877510 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5bwn\" (UniqueName: \"kubernetes.io/projected/fe90ff60-e166-4e73-a995-dfd82a45895c-kube-api-access-q5bwn\") pod \"frr-k8s-webhook-server-6998585d5-qxfjt\" (UID: \"fe90ff60-e166-4e73-a995-dfd82a45895c\") " pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.914739 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpldk\" (UniqueName: \"kubernetes.io/projected/febb59e9-b008-465d-890a-6a6a342cf56b-kube-api-access-lpldk\") pod \"controller-6c7b4b5f48-5kfg7\" (UID: \"febb59e9-b008-465d-890a-6a6a342cf56b\") " pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.922083 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-metrics-certs\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.922202 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5df2adbb-3909-4e10-8f11-5953fb961766-metallb-excludel2\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.922280 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/febb59e9-b008-465d-890a-6a6a342cf56b-cert\") pod \"controller-6c7b4b5f48-5kfg7\" (UID: \"febb59e9-b008-465d-890a-6a6a342cf56b\") " pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.922334 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/febb59e9-b008-465d-890a-6a6a342cf56b-metrics-certs\") pod \"controller-6c7b4b5f48-5kfg7\" (UID: \"febb59e9-b008-465d-890a-6a6a342cf56b\") " pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.922380 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-memberlist\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.922439 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmqc2\" (UniqueName: \"kubernetes.io/projected/5df2adbb-3909-4e10-8f11-5953fb961766-kube-api-access-dmqc2\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: E1125 15:08:18.922627 4731 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found
Nov 25 15:08:18 crc kubenswrapper[4731]: E1125 15:08:18.922790 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-metrics-certs podName:5df2adbb-3909-4e10-8f11-5953fb961766 nodeName:}" failed. No retries permitted until 2025-11-25 15:08:19.422763183 +0000 UTC m=+858.169897613 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-metrics-certs") pod "speaker-2jnfr" (UID: "5df2adbb-3909-4e10-8f11-5953fb961766") : secret "speaker-certs-secret" not found
Nov 25 15:08:18 crc kubenswrapper[4731]: E1125 15:08:18.923960 4731 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 25 15:08:18 crc kubenswrapper[4731]: E1125 15:08:18.924036 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-memberlist podName:5df2adbb-3909-4e10-8f11-5953fb961766 nodeName:}" failed. No retries permitted until 2025-11-25 15:08:19.424014134 +0000 UTC m=+858.171148564 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-memberlist") pod "speaker-2jnfr" (UID: "5df2adbb-3909-4e10-8f11-5953fb961766") : secret "metallb-memberlist" not found
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.924122 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/5df2adbb-3909-4e10-8f11-5953fb961766-metallb-excludel2\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.931646 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg6j4\" (UniqueName: \"kubernetes.io/projected/7d337b1a-530a-4c6d-8eec-acd385c4dd65-kube-api-access-pg6j4\") pod \"frr-k8s-2r6xw\" (UID: \"7d337b1a-530a-4c6d-8eec-acd385c4dd65\") " pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.962375 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmqc2\" (UniqueName: \"kubernetes.io/projected/5df2adbb-3909-4e10-8f11-5953fb961766-kube-api-access-dmqc2\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.979306 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:18 crc kubenswrapper[4731]: I1125 15:08:18.993838 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.024450 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/febb59e9-b008-465d-890a-6a6a342cf56b-cert\") pod \"controller-6c7b4b5f48-5kfg7\" (UID: \"febb59e9-b008-465d-890a-6a6a342cf56b\") " pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.024670 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/febb59e9-b008-465d-890a-6a6a342cf56b-metrics-certs\") pod \"controller-6c7b4b5f48-5kfg7\" (UID: \"febb59e9-b008-465d-890a-6a6a342cf56b\") " pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.024806 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpldk\" (UniqueName: \"kubernetes.io/projected/febb59e9-b008-465d-890a-6a6a342cf56b-kube-api-access-lpldk\") pod \"controller-6c7b4b5f48-5kfg7\" (UID: \"febb59e9-b008-465d-890a-6a6a342cf56b\") " pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:19 crc kubenswrapper[4731]: E1125 15:08:19.024950 4731 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found
Nov 25 15:08:19 crc kubenswrapper[4731]: E1125 15:08:19.025082 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/febb59e9-b008-465d-890a-6a6a342cf56b-metrics-certs podName:febb59e9-b008-465d-890a-6a6a342cf56b nodeName:}" failed. No retries permitted until 2025-11-25 15:08:19.525046473 +0000 UTC m=+858.272181083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/febb59e9-b008-465d-890a-6a6a342cf56b-metrics-certs") pod "controller-6c7b4b5f48-5kfg7" (UID: "febb59e9-b008-465d-890a-6a6a342cf56b") : secret "controller-certs-secret" not found
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.028002 4731 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.040416 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/febb59e9-b008-465d-890a-6a6a342cf56b-cert\") pod \"controller-6c7b4b5f48-5kfg7\" (UID: \"febb59e9-b008-465d-890a-6a6a342cf56b\") " pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.051619 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpldk\" (UniqueName: \"kubernetes.io/projected/febb59e9-b008-465d-890a-6a6a342cf56b-kube-api-access-lpldk\") pod \"controller-6c7b4b5f48-5kfg7\" (UID: \"febb59e9-b008-465d-890a-6a6a342cf56b\") " pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.233167 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"]
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.431073 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-metrics-certs\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.431147 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-memberlist\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:19 crc kubenswrapper[4731]: E1125 15:08:19.431304 4731 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Nov 25 15:08:19 crc kubenswrapper[4731]: E1125 15:08:19.431395 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-memberlist podName:5df2adbb-3909-4e10-8f11-5953fb961766 nodeName:}" failed. No retries permitted until 2025-11-25 15:08:20.431373781 +0000 UTC m=+859.178508211 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-memberlist") pod "speaker-2jnfr" (UID: "5df2adbb-3909-4e10-8f11-5953fb961766") : secret "metallb-memberlist" not found
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.435755 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-metrics-certs\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.491205 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2r6xw" event={"ID":"7d337b1a-530a-4c6d-8eec-acd385c4dd65","Type":"ContainerStarted","Data":"12fbe638b88e3c7b77178c398fd56f44d604755db5c7694745ec5d29e372a310"}
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.492709 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt" event={"ID":"fe90ff60-e166-4e73-a995-dfd82a45895c","Type":"ContainerStarted","Data":"5f4918d1cf6f925caaa6303551358a6fc7c23803c8d8034ba37ff854359aaa12"}
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.533030 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/febb59e9-b008-465d-890a-6a6a342cf56b-metrics-certs\") pod \"controller-6c7b4b5f48-5kfg7\" (UID: \"febb59e9-b008-465d-890a-6a6a342cf56b\") " pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.537116 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/febb59e9-b008-465d-890a-6a6a342cf56b-metrics-certs\") pod \"controller-6c7b4b5f48-5kfg7\" (UID: \"febb59e9-b008-465d-890a-6a6a342cf56b\") " pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.640535 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:19 crc kubenswrapper[4731]: I1125 15:08:19.919144 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6c7b4b5f48-5kfg7"]
Nov 25 15:08:20 crc kubenswrapper[4731]: I1125 15:08:20.448227 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-memberlist\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:20 crc kubenswrapper[4731]: I1125 15:08:20.456699 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/5df2adbb-3909-4e10-8f11-5953fb961766-memberlist\") pod \"speaker-2jnfr\" (UID: \"5df2adbb-3909-4e10-8f11-5953fb961766\") " pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:20 crc kubenswrapper[4731]: I1125 15:08:20.502162 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5kfg7" event={"ID":"febb59e9-b008-465d-890a-6a6a342cf56b","Type":"ContainerStarted","Data":"ca3094ff93fccb88dbd35c1f440e029a46467480c244dcc4486cafd5b065d8de"}
Nov 25 15:08:20 crc kubenswrapper[4731]: I1125 15:08:20.502214 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5kfg7" event={"ID":"febb59e9-b008-465d-890a-6a6a342cf56b","Type":"ContainerStarted","Data":"045c18a5821566514010a4ab90d9b4c06b9353720aa28024978ec86fbf560bf0"}
Nov 25 15:08:20 crc kubenswrapper[4731]: I1125 15:08:20.502224 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6c7b4b5f48-5kfg7" event={"ID":"febb59e9-b008-465d-890a-6a6a342cf56b","Type":"ContainerStarted","Data":"eec4b9d325d9a332e281b15d092cf6629645488b0c643a28a0057bf1eb5d8792"}
Nov 25 15:08:20 crc kubenswrapper[4731]: I1125 15:08:20.502658 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:20 crc kubenswrapper[4731]: I1125 15:08:20.516686 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:20 crc kubenswrapper[4731]: I1125 15:08:20.533540 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6c7b4b5f48-5kfg7" podStartSLOduration=2.5335120140000003 podStartE2EDuration="2.533512014s" podCreationTimestamp="2025-11-25 15:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:08:20.532877748 +0000 UTC m=+859.280012198" watchObservedRunningTime="2025-11-25 15:08:20.533512014 +0000 UTC m=+859.280646444"
Nov 25 15:08:20 crc kubenswrapper[4731]: W1125 15:08:20.561213 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5df2adbb_3909_4e10_8f11_5953fb961766.slice/crio-eeae4b51cae37c17434f66b3bead1ab9035d6d5ae5d00cc552d6a7de3138c53c WatchSource:0}: Error finding container eeae4b51cae37c17434f66b3bead1ab9035d6d5ae5d00cc552d6a7de3138c53c: Status 404 returned error can't find the container with id eeae4b51cae37c17434f66b3bead1ab9035d6d5ae5d00cc552d6a7de3138c53c
Nov 25 15:08:21 crc kubenswrapper[4731]: I1125 15:08:21.512079 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2jnfr" event={"ID":"5df2adbb-3909-4e10-8f11-5953fb961766","Type":"ContainerStarted","Data":"799a4d2b647eb0b413cb4092dd8304665f7ac7ce9b128529d8dc8bf1e05ea20d"}
Nov 25 15:08:21 crc kubenswrapper[4731]: I1125 15:08:21.512479 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2jnfr" event={"ID":"5df2adbb-3909-4e10-8f11-5953fb961766","Type":"ContainerStarted","Data":"86b674a214ce3eba053e2eab017cc2c10b8b6a5e1796dd81ff855785a15449a9"}
Nov 25 15:08:21 crc kubenswrapper[4731]: I1125 15:08:21.512499 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-2jnfr" event={"ID":"5df2adbb-3909-4e10-8f11-5953fb961766","Type":"ContainerStarted","Data":"eeae4b51cae37c17434f66b3bead1ab9035d6d5ae5d00cc552d6a7de3138c53c"}
Nov 25 15:08:21 crc kubenswrapper[4731]: I1125 15:08:21.513011 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:21 crc kubenswrapper[4731]: I1125 15:08:21.534392 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-2jnfr" podStartSLOduration=3.534376731 podStartE2EDuration="3.534376731s" podCreationTimestamp="2025-11-25 15:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:08:21.533249964 +0000 UTC m=+860.280384404" watchObservedRunningTime="2025-11-25 15:08:21.534376731 +0000 UTC m=+860.281511161"
Nov 25 15:08:27 crc kubenswrapper[4731]: I1125 15:08:27.565250 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d337b1a-530a-4c6d-8eec-acd385c4dd65" containerID="dadd8bf07492505cae2940a8b8617728cf180614c5245524dd882f970f04a4e8" exitCode=0
Nov 25 15:08:27 crc kubenswrapper[4731]: I1125 15:08:27.565342 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2r6xw" event={"ID":"7d337b1a-530a-4c6d-8eec-acd385c4dd65","Type":"ContainerDied","Data":"dadd8bf07492505cae2940a8b8617728cf180614c5245524dd882f970f04a4e8"}
Nov 25 15:08:27 crc kubenswrapper[4731]: I1125 15:08:27.568852 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt" event={"ID":"fe90ff60-e166-4e73-a995-dfd82a45895c","Type":"ContainerStarted","Data":"47a3cfd99bad402204817250327061a4fab3acd521b84d8f2d79cc3481a5a633"}
Nov 25 15:08:27 crc kubenswrapper[4731]: I1125 15:08:27.569127 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"
Nov 25 15:08:27 crc kubenswrapper[4731]: I1125 15:08:27.586449 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt" podStartSLOduration=1.716628942 podStartE2EDuration="9.586432264s" podCreationTimestamp="2025-11-25 15:08:18 +0000 UTC" firstStartedPulling="2025-11-25 15:08:19.24133689 +0000 UTC m=+857.988471320" lastFinishedPulling="2025-11-25 15:08:27.111140152 +0000 UTC m=+865.858274642" observedRunningTime="2025-11-25 15:08:27.583328358 +0000 UTC m=+866.330462808" watchObservedRunningTime="2025-11-25 15:08:27.586432264 +0000 UTC m=+866.333566694"
Nov 25 15:08:29 crc kubenswrapper[4731]: I1125 15:08:29.582267 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d337b1a-530a-4c6d-8eec-acd385c4dd65" containerID="77d291791a7d2c2037c9a977485808c2fb01acf2d3ba37eb00089a1748eca1df" exitCode=0
Nov 25 15:08:29 crc kubenswrapper[4731]: I1125 15:08:29.582353 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2r6xw" event={"ID":"7d337b1a-530a-4c6d-8eec-acd385c4dd65","Type":"ContainerDied","Data":"77d291791a7d2c2037c9a977485808c2fb01acf2d3ba37eb00089a1748eca1df"}
Nov 25 15:08:30 crc kubenswrapper[4731]: I1125 15:08:30.524950 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-2jnfr"
Nov 25 15:08:30 crc kubenswrapper[4731]: I1125 15:08:30.592545 4731 generic.go:334] "Generic (PLEG): container finished" podID="7d337b1a-530a-4c6d-8eec-acd385c4dd65" containerID="b0b09c6431bc3f17cfd3f5c4ed4ba959f03ce0f6535daee996db21c8fb6df09f" exitCode=0
Nov 25 15:08:30 crc kubenswrapper[4731]: I1125 15:08:30.592609 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2r6xw" event={"ID":"7d337b1a-530a-4c6d-8eec-acd385c4dd65","Type":"ContainerDied","Data":"b0b09c6431bc3f17cfd3f5c4ed4ba959f03ce0f6535daee996db21c8fb6df09f"}
Nov 25 15:08:31 crc kubenswrapper[4731]: I1125 15:08:31.611795 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2r6xw" event={"ID":"7d337b1a-530a-4c6d-8eec-acd385c4dd65","Type":"ContainerStarted","Data":"7e0145866ed6d0d68961e17c1c32fb7bb5bd3d0358fe5978e742149198e608af"}
Nov 25 15:08:31 crc kubenswrapper[4731]: I1125 15:08:31.612249 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2r6xw" event={"ID":"7d337b1a-530a-4c6d-8eec-acd385c4dd65","Type":"ContainerStarted","Data":"c244de55d89eea749ea5aea62d846f2c48a95c61aef3caeadb8d0d68118dca7c"}
Nov 25 15:08:31 crc kubenswrapper[4731]: I1125 15:08:31.612263 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2r6xw" event={"ID":"7d337b1a-530a-4c6d-8eec-acd385c4dd65","Type":"ContainerStarted","Data":"1cd3320587fe227b45d2f01d006cfa626469b10de721c8ac396b1db4a3f14251"}
Nov 25 15:08:31 crc kubenswrapper[4731]: I1125 15:08:31.612275 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2r6xw" event={"ID":"7d337b1a-530a-4c6d-8eec-acd385c4dd65","Type":"ContainerStarted","Data":"0fcb3129477a18e3609e71420fe266f9a7d4d22ed778e459123cca8c1fe126ab"}
Nov 25 15:08:31 crc kubenswrapper[4731]: I1125 15:08:31.612291 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2r6xw" event={"ID":"7d337b1a-530a-4c6d-8eec-acd385c4dd65","Type":"ContainerStarted","Data":"685f40902b0b46d7ddf37fa495887ad957d3b2785f47dbc89e5119a812858098"}
Nov 25 15:08:32 crc kubenswrapper[4731]: I1125 15:08:32.625183 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-2r6xw" event={"ID":"7d337b1a-530a-4c6d-8eec-acd385c4dd65","Type":"ContainerStarted","Data":"a2a8e7676cb46255b21c1fdc3324f66b61b04ac8dae7f395c320e2b83189785b"}
Nov 25 15:08:32 crc kubenswrapper[4731]: I1125 15:08:32.625397 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:32 crc kubenswrapper[4731]: I1125 15:08:32.658434 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-2r6xw" podStartSLOduration=6.713795713 podStartE2EDuration="14.658417411s" podCreationTimestamp="2025-11-25 15:08:18 +0000 UTC" firstStartedPulling="2025-11-25 15:08:19.188524504 +0000 UTC m=+857.935658934" lastFinishedPulling="2025-11-25 15:08:27.133146182 +0000 UTC m=+865.880280632" observedRunningTime="2025-11-25 15:08:32.656810101 +0000 UTC m=+871.403944531" watchObservedRunningTime="2025-11-25 15:08:32.658417411 +0000 UTC m=+871.405551841"
Nov 25 15:08:33 crc kubenswrapper[4731]: I1125 15:08:33.897415 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zd46j"]
Nov 25 15:08:33 crc kubenswrapper[4731]: I1125 15:08:33.901151 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zd46j"
Nov 25 15:08:33 crc kubenswrapper[4731]: I1125 15:08:33.904803 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Nov 25 15:08:33 crc kubenswrapper[4731]: I1125 15:08:33.905049 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zd46j"]
Nov 25 15:08:33 crc kubenswrapper[4731]: I1125 15:08:33.905164 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-xvzvr"
Nov 25 15:08:33 crc kubenswrapper[4731]: I1125 15:08:33.911057 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Nov 25 15:08:33 crc kubenswrapper[4731]: I1125 15:08:33.980036 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:34 crc kubenswrapper[4731]: I1125 15:08:34.040436 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:34 crc kubenswrapper[4731]: I1125 15:08:34.060297 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l72dm\" (UniqueName: \"kubernetes.io/projected/dedf20e0-6970-489a-9363-cdf2d9956342-kube-api-access-l72dm\") pod \"openstack-operator-index-zd46j\" (UID: \"dedf20e0-6970-489a-9363-cdf2d9956342\") " pod="openstack-operators/openstack-operator-index-zd46j"
Nov 25 15:08:34 crc kubenswrapper[4731]: I1125 15:08:34.161920 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l72dm\" (UniqueName: \"kubernetes.io/projected/dedf20e0-6970-489a-9363-cdf2d9956342-kube-api-access-l72dm\") pod \"openstack-operator-index-zd46j\" (UID: \"dedf20e0-6970-489a-9363-cdf2d9956342\") " pod="openstack-operators/openstack-operator-index-zd46j"
Nov 25 15:08:34 crc kubenswrapper[4731]: I1125 15:08:34.182614 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l72dm\" (UniqueName: \"kubernetes.io/projected/dedf20e0-6970-489a-9363-cdf2d9956342-kube-api-access-l72dm\") pod \"openstack-operator-index-zd46j\" (UID: \"dedf20e0-6970-489a-9363-cdf2d9956342\") " pod="openstack-operators/openstack-operator-index-zd46j"
Nov 25 15:08:34 crc kubenswrapper[4731]: I1125 15:08:34.236322 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zd46j"
Nov 25 15:08:34 crc kubenswrapper[4731]: I1125 15:08:34.690904 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zd46j"]
Nov 25 15:08:34 crc kubenswrapper[4731]: W1125 15:08:34.696232 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddedf20e0_6970_489a_9363_cdf2d9956342.slice/crio-15c09f66ba486fd84ccf315b2a28df0b63e12d543e0c943da84860773c9b2c72 WatchSource:0}: Error finding container 15c09f66ba486fd84ccf315b2a28df0b63e12d543e0c943da84860773c9b2c72: Status 404 returned error can't find the container with id 15c09f66ba486fd84ccf315b2a28df0b63e12d543e0c943da84860773c9b2c72
Nov 25 15:08:35 crc kubenswrapper[4731]: I1125 15:08:35.649233 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zd46j" event={"ID":"dedf20e0-6970-489a-9363-cdf2d9956342","Type":"ContainerStarted","Data":"15c09f66ba486fd84ccf315b2a28df0b63e12d543e0c943da84860773c9b2c72"}
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.065736 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zd46j"]
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.665606 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zd46j" event={"ID":"dedf20e0-6970-489a-9363-cdf2d9956342","Type":"ContainerStarted","Data":"c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649"}
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.665843 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-zd46j" podUID="dedf20e0-6970-489a-9363-cdf2d9956342" containerName="registry-server" containerID="cri-o://c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649" gracePeriod=2
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.672598 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-6k7ns"]
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.673598 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6k7ns"
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.687746 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6k7ns"]
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.718474 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zd46j" podStartSLOduration=1.929193396 podStartE2EDuration="4.718453523s" podCreationTimestamp="2025-11-25 15:08:33 +0000 UTC" firstStartedPulling="2025-11-25 15:08:34.699237814 +0000 UTC m=+873.446372244" lastFinishedPulling="2025-11-25 15:08:37.488497941 +0000 UTC m=+876.235632371" observedRunningTime="2025-11-25 15:08:37.688112519 +0000 UTC m=+876.435246949" watchObservedRunningTime="2025-11-25 15:08:37.718453523 +0000 UTC m=+876.465587953"
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.838643 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsg4q\" (UniqueName: \"kubernetes.io/projected/f573d017-983d-413d-adea-34617a278460-kube-api-access-tsg4q\") pod \"openstack-operator-index-6k7ns\" (UID: \"f573d017-983d-413d-adea-34617a278460\") " pod="openstack-operators/openstack-operator-index-6k7ns"
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.940220 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsg4q\" (UniqueName: \"kubernetes.io/projected/f573d017-983d-413d-adea-34617a278460-kube-api-access-tsg4q\") pod \"openstack-operator-index-6k7ns\" (UID: \"f573d017-983d-413d-adea-34617a278460\") " pod="openstack-operators/openstack-operator-index-6k7ns"
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.966430 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsg4q\" (UniqueName: \"kubernetes.io/projected/f573d017-983d-413d-adea-34617a278460-kube-api-access-tsg4q\") pod \"openstack-operator-index-6k7ns\" (UID: \"f573d017-983d-413d-adea-34617a278460\") " pod="openstack-operators/openstack-operator-index-6k7ns"
Nov 25 15:08:37 crc kubenswrapper[4731]: I1125 15:08:37.997310 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-6k7ns"
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.072213 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zd46j"
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.246993 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l72dm\" (UniqueName: \"kubernetes.io/projected/dedf20e0-6970-489a-9363-cdf2d9956342-kube-api-access-l72dm\") pod \"dedf20e0-6970-489a-9363-cdf2d9956342\" (UID: \"dedf20e0-6970-489a-9363-cdf2d9956342\") "
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.254496 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedf20e0-6970-489a-9363-cdf2d9956342-kube-api-access-l72dm" (OuterVolumeSpecName: "kube-api-access-l72dm") pod "dedf20e0-6970-489a-9363-cdf2d9956342" (UID: "dedf20e0-6970-489a-9363-cdf2d9956342"). InnerVolumeSpecName "kube-api-access-l72dm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.348514 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l72dm\" (UniqueName: \"kubernetes.io/projected/dedf20e0-6970-489a-9363-cdf2d9956342-kube-api-access-l72dm\") on node \"crc\" DevicePath \"\""
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.495876 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-6k7ns"]
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.675895 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6k7ns" event={"ID":"f573d017-983d-413d-adea-34617a278460","Type":"ContainerStarted","Data":"f9445d25ef135b3359804d1c470c1e4be989589b1e3a4b62fece07228c6e203a"}
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.678771 4731 generic.go:334] "Generic (PLEG): container finished" podID="dedf20e0-6970-489a-9363-cdf2d9956342" containerID="c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649" exitCode=0
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.678844 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zd46j" event={"ID":"dedf20e0-6970-489a-9363-cdf2d9956342","Type":"ContainerDied","Data":"c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649"}
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.678908 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zd46j"
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.678930 4731 scope.go:117] "RemoveContainer" containerID="c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649"
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.678911 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zd46j" event={"ID":"dedf20e0-6970-489a-9363-cdf2d9956342","Type":"ContainerDied","Data":"15c09f66ba486fd84ccf315b2a28df0b63e12d543e0c943da84860773c9b2c72"}
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.695799 4731 scope.go:117] "RemoveContainer" containerID="c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649"
Nov 25 15:08:38 crc kubenswrapper[4731]: E1125 15:08:38.696474 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649\": container with ID starting with c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649 not found: ID does not exist" containerID="c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649"
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.696528 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649"} err="failed to get container status \"c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649\": rpc error: code = NotFound desc = could not find container \"c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649\": container with ID starting with c916b93c7de408f5a299dbc91d716f0ff90339009ee180bf0d8a64bc1ec82649 not found: ID does not exist"
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.729557 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-zd46j"]
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.733295 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-zd46j"]
Nov 25 15:08:38 crc kubenswrapper[4731]: I1125 15:08:38.820621 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedf20e0-6970-489a-9363-cdf2d9956342" path="/var/lib/kubelet/pods/dedf20e0-6970-489a-9363-cdf2d9956342/volumes"
Nov 25 15:08:39 crc kubenswrapper[4731]: I1125 15:08:39.006089 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-6998585d5-qxfjt"
Nov 25 15:08:39 crc kubenswrapper[4731]: I1125 15:08:39.643933 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6c7b4b5f48-5kfg7"
Nov 25 15:08:39 crc kubenswrapper[4731]: I1125 15:08:39.687611 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-6k7ns" event={"ID":"f573d017-983d-413d-adea-34617a278460","Type":"ContainerStarted","Data":"cc9f323eacb8257bb1e4fee1de7398f1e687ee55216df87f2bab63b8e17dbcde"}
Nov 25 15:08:39 crc kubenswrapper[4731]: I1125 15:08:39.707093 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-6k7ns" podStartSLOduration=2.641177769 podStartE2EDuration="2.707064046s" podCreationTimestamp="2025-11-25 15:08:37 +0000 UTC" firstStartedPulling="2025-11-25 15:08:38.491305046 +0000 UTC m=+877.238439476" lastFinishedPulling="2025-11-25 15:08:38.557191323 +0000 UTC m=+877.304325753" observedRunningTime="2025-11-25 15:08:39.705232221 +0000 UTC m=+878.452366671" watchObservedRunningTime="2025-11-25 15:08:39.707064046 +0000 UTC m=+878.454198506"
Nov 25 15:08:47 crc kubenswrapper[4731]: I1125 15:08:47.998372 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-6k7ns"
Nov 25 15:08:48 crc kubenswrapper[4731]: I1125 15:08:47.999588 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-6k7ns"
Nov 25 15:08:48 crc kubenswrapper[4731]: I1125 15:08:48.043678 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-6k7ns"
Nov 25 15:08:48 crc kubenswrapper[4731]: I1125 15:08:48.790678 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-6k7ns"
Nov 25 15:08:48 crc kubenswrapper[4731]: I1125 15:08:48.984248 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-2r6xw"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.562528 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"]
Nov 25 15:08:55 crc kubenswrapper[4731]: E1125 15:08:55.564052 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dedf20e0-6970-489a-9363-cdf2d9956342" containerName="registry-server"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.564070 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="dedf20e0-6970-489a-9363-cdf2d9956342" containerName="registry-server"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.564224 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedf20e0-6970-489a-9363-cdf2d9956342" containerName="registry-server"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.565252 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.573594 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wxnzl"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.576039 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"]
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.641025 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-bundle\") pod \"9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") " pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.641124 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-util\") pod \"9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") " pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.641196 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdwrp\" (UniqueName: \"kubernetes.io/projected/3958d450-0a9a-4191-b0e0-5989df2da551-kube-api-access-sdwrp\") pod \"9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") " pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.742529 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-bundle\") pod \"9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") " pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.742594 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-util\") pod \"9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") " pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.742648 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdwrp\" (UniqueName: \"kubernetes.io/projected/3958d450-0a9a-4191-b0e0-5989df2da551-kube-api-access-sdwrp\") pod \"9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") " pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.743367 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-util\") pod \"9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") " pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.743533 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-bundle\") pod \"9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") " pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.760876 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdwrp\" (UniqueName: \"kubernetes.io/projected/3958d450-0a9a-4191-b0e0-5989df2da551-kube-api-access-sdwrp\") pod \"9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") " pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:55 crc kubenswrapper[4731]: I1125 15:08:55.880965 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:08:56 crc kubenswrapper[4731]: I1125 15:08:56.097510 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"]
Nov 25 15:08:56 crc kubenswrapper[4731]: I1125 15:08:56.811216 4731 generic.go:334] "Generic (PLEG): container finished" podID="3958d450-0a9a-4191-b0e0-5989df2da551" containerID="5d725d99ac9fb26df1f3af51e79e05708b7116da9cf031790c00f15af3831717" exitCode=0
Nov 25 15:08:56 crc kubenswrapper[4731]: I1125 15:08:56.819671 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85" event={"ID":"3958d450-0a9a-4191-b0e0-5989df2da551","Type":"ContainerDied","Data":"5d725d99ac9fb26df1f3af51e79e05708b7116da9cf031790c00f15af3831717"}
Nov 25 15:08:56 crc kubenswrapper[4731]: I1125 15:08:56.819719 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85" event={"ID":"3958d450-0a9a-4191-b0e0-5989df2da551","Type":"ContainerStarted","Data":"3f657180aea605e51bfb2a56ff6e43fd4e719c9102f3ea166f64e9f7b65f1eb5"}
Nov 25 15:08:57 crc kubenswrapper[4731]: I1125 15:08:57.821479 4731 generic.go:334] "Generic (PLEG): container finished" podID="3958d450-0a9a-4191-b0e0-5989df2da551" containerID="1e9aa00f7eac464da388015c7719b958423023a4ac5a8e532dbe1005bc5ed928" exitCode=0
Nov 25 15:08:57 crc kubenswrapper[4731]: I1125 15:08:57.821564 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85" event={"ID":"3958d450-0a9a-4191-b0e0-5989df2da551","Type":"ContainerDied","Data":"1e9aa00f7eac464da388015c7719b958423023a4ac5a8e532dbe1005bc5ed928"}
Nov 25 15:08:58 crc kubenswrapper[4731]: I1125 15:08:58.831372 4731 generic.go:334] "Generic (PLEG): container finished" podID="3958d450-0a9a-4191-b0e0-5989df2da551" containerID="f15e968246730a4aa24ef3602678cb839394e942e7f55b912ab0acaf6fa0e366" exitCode=0
Nov 25 15:08:58 crc kubenswrapper[4731]: I1125 15:08:58.831431 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85" event={"ID":"3958d450-0a9a-4191-b0e0-5989df2da551","Type":"ContainerDied","Data":"f15e968246730a4aa24ef3602678cb839394e942e7f55b912ab0acaf6fa0e366"}
Nov 25 15:08:59 crc kubenswrapper[4731]: I1125 15:08:59.935604 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9vf2q"]
Nov 25 15:08:59 crc kubenswrapper[4731]: I1125 15:08:59.939110 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:08:59 crc kubenswrapper[4731]: I1125 15:08:59.963979 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9vf2q"]
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.092600 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.107352 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glssv\" (UniqueName: \"kubernetes.io/projected/28881c92-a500-4d8b-8982-0e9aa793cf84-kube-api-access-glssv\") pod \"certified-operators-9vf2q\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.107625 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-catalog-content\") pod \"certified-operators-9vf2q\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.107664 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-utilities\") pod \"certified-operators-9vf2q\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.209809 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-util\") pod \"3958d450-0a9a-4191-b0e0-5989df2da551\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") "
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.210036 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-bundle\") pod \"3958d450-0a9a-4191-b0e0-5989df2da551\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") "
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.210088 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdwrp\" (UniqueName: \"kubernetes.io/projected/3958d450-0a9a-4191-b0e0-5989df2da551-kube-api-access-sdwrp\") pod \"3958d450-0a9a-4191-b0e0-5989df2da551\" (UID: \"3958d450-0a9a-4191-b0e0-5989df2da551\") "
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.210422 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glssv\" (UniqueName: \"kubernetes.io/projected/28881c92-a500-4d8b-8982-0e9aa793cf84-kube-api-access-glssv\") pod \"certified-operators-9vf2q\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.210489 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-catalog-content\") pod \"certified-operators-9vf2q\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.210526 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-utilities\") pod \"certified-operators-9vf2q\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.211350 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-bundle" (OuterVolumeSpecName: "bundle") pod "3958d450-0a9a-4191-b0e0-5989df2da551" (UID: "3958d450-0a9a-4191-b0e0-5989df2da551"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.211422 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-utilities\") pod \"certified-operators-9vf2q\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.211450 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-catalog-content\") pod \"certified-operators-9vf2q\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.228494 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3958d450-0a9a-4191-b0e0-5989df2da551-kube-api-access-sdwrp" (OuterVolumeSpecName: "kube-api-access-sdwrp") pod "3958d450-0a9a-4191-b0e0-5989df2da551" (UID: "3958d450-0a9a-4191-b0e0-5989df2da551"). InnerVolumeSpecName "kube-api-access-sdwrp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.232766 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glssv\" (UniqueName: \"kubernetes.io/projected/28881c92-a500-4d8b-8982-0e9aa793cf84-kube-api-access-glssv\") pod \"certified-operators-9vf2q\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.235199 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-util" (OuterVolumeSpecName: "util") pod "3958d450-0a9a-4191-b0e0-5989df2da551" (UID: "3958d450-0a9a-4191-b0e0-5989df2da551"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.269750 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9vf2q"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.313339 4731 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.314249 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdwrp\" (UniqueName: \"kubernetes.io/projected/3958d450-0a9a-4191-b0e0-5989df2da551-kube-api-access-sdwrp\") on node \"crc\" DevicePath \"\""
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.314349 4731 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3958d450-0a9a-4191-b0e0-5989df2da551-util\") on node \"crc\" DevicePath \"\""
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.755439 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9vf2q"]
Nov 25 15:09:00 crc kubenswrapper[4731]: W1125 15:09:00.760665 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28881c92_a500_4d8b_8982_0e9aa793cf84.slice/crio-79ecb1e4cc84716c8f3b1076101dfe760d2f3eeaf76249e7bba43f526c43e96a WatchSource:0}: Error finding container 79ecb1e4cc84716c8f3b1076101dfe760d2f3eeaf76249e7bba43f526c43e96a: Status 404 returned error can't find the container with id 79ecb1e4cc84716c8f3b1076101dfe760d2f3eeaf76249e7bba43f526c43e96a
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.850949 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.850913 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85" event={"ID":"3958d450-0a9a-4191-b0e0-5989df2da551","Type":"ContainerDied","Data":"3f657180aea605e51bfb2a56ff6e43fd4e719c9102f3ea166f64e9f7b65f1eb5"}
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.851365 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f657180aea605e51bfb2a56ff6e43fd4e719c9102f3ea166f64e9f7b65f1eb5"
Nov 25 15:09:00 crc kubenswrapper[4731]: I1125 15:09:00.852597 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vf2q" event={"ID":"28881c92-a500-4d8b-8982-0e9aa793cf84","Type":"ContainerStarted","Data":"79ecb1e4cc84716c8f3b1076101dfe760d2f3eeaf76249e7bba43f526c43e96a"}
Nov 25 15:09:01 crc kubenswrapper[4731]: I1125 15:09:01.863131 4731 generic.go:334] "Generic (PLEG): container finished" podID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerID="617f3cf790c14ac79f680e4803e057e04bf56a7e9b7fafed50f9ece22ce8b1a4" exitCode=0
Nov 25 15:09:01 crc kubenswrapper[4731]: I1125 15:09:01.863263 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vf2q" event={"ID":"28881c92-a500-4d8b-8982-0e9aa793cf84","Type":"ContainerDied","Data":"617f3cf790c14ac79f680e4803e057e04bf56a7e9b7fafed50f9ece22ce8b1a4"}
Nov 25 15:09:02 crc kubenswrapper[4731]: I1125 15:09:02.873562 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vf2q" event={"ID":"28881c92-a500-4d8b-8982-0e9aa793cf84","Type":"ContainerStarted","Data":"72b9a78215a7dc2f1610f5130dda14e96c01fbf8797e50ce7b2de37d56a58bfa"}
Nov 25 15:09:03 crc kubenswrapper[4731]: I1125 15:09:03.881342 4731 generic.go:334] "Generic (PLEG): container finished" podID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerID="72b9a78215a7dc2f1610f5130dda14e96c01fbf8797e50ce7b2de37d56a58bfa" exitCode=0
Nov 25 15:09:03 crc kubenswrapper[4731]: I1125 15:09:03.881399 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vf2q" event={"ID":"28881c92-a500-4d8b-8982-0e9aa793cf84","Type":"ContainerDied","Data":"72b9a78215a7dc2f1610f5130dda14e96c01fbf8797e50ce7b2de37d56a58bfa"}
Nov 25 15:09:04 crc kubenswrapper[4731]: I1125 15:09:04.890408 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vf2q" event={"ID":"28881c92-a500-4d8b-8982-0e9aa793cf84","Type":"ContainerStarted","Data":"07f98050d042bcc2f60da5f63c2eb29d4bc00fc0d808f2c5425ea58e6b7ef000"}
Nov 25 15:09:04 crc kubenswrapper[4731]: I1125 15:09:04.910678 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9vf2q" podStartSLOduration=3.5004688330000002 podStartE2EDuration="5.910657919s" podCreationTimestamp="2025-11-25 15:08:59 +0000 UTC" firstStartedPulling="2025-11-25 15:09:01.864771675 +0000 UTC m=+900.611906095" lastFinishedPulling="2025-11-25 15:09:04.274960751 +0000 UTC m=+903.022095181" observedRunningTime="2025-11-25 15:09:04.908863615 +0000 UTC m=+903.655998045" watchObservedRunningTime="2025-11-25 15:09:04.910657919 +0000 UTC m=+903.657792349"
Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.442008 4731 kubelet.go:2421] "SyncLoop ADD"
source="api" pods=["openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8"] Nov 25 15:09:08 crc kubenswrapper[4731]: E1125 15:09:08.442567 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3958d450-0a9a-4191-b0e0-5989df2da551" containerName="pull" Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.442581 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3958d450-0a9a-4191-b0e0-5989df2da551" containerName="pull" Nov 25 15:09:08 crc kubenswrapper[4731]: E1125 15:09:08.442596 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3958d450-0a9a-4191-b0e0-5989df2da551" containerName="util" Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.442603 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3958d450-0a9a-4191-b0e0-5989df2da551" containerName="util" Nov 25 15:09:08 crc kubenswrapper[4731]: E1125 15:09:08.442624 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3958d450-0a9a-4191-b0e0-5989df2da551" containerName="extract" Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.442632 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3958d450-0a9a-4191-b0e0-5989df2da551" containerName="extract" Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.442756 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3958d450-0a9a-4191-b0e0-5989df2da551" containerName="extract" Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.443231 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8" Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.445365 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-dnqtw" Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.468580 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8"] Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.638813 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhgwq\" (UniqueName: \"kubernetes.io/projected/c2b163b6-05e5-4c6e-95f9-2748e85eb037-kube-api-access-vhgwq\") pod \"openstack-operator-controller-operator-f546cbb6c-wz8k8\" (UID: \"c2b163b6-05e5-4c6e-95f9-2748e85eb037\") " pod="openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8" Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.740442 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhgwq\" (UniqueName: \"kubernetes.io/projected/c2b163b6-05e5-4c6e-95f9-2748e85eb037-kube-api-access-vhgwq\") pod \"openstack-operator-controller-operator-f546cbb6c-wz8k8\" (UID: \"c2b163b6-05e5-4c6e-95f9-2748e85eb037\") " pod="openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8" Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.765978 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhgwq\" (UniqueName: \"kubernetes.io/projected/c2b163b6-05e5-4c6e-95f9-2748e85eb037-kube-api-access-vhgwq\") pod \"openstack-operator-controller-operator-f546cbb6c-wz8k8\" (UID: \"c2b163b6-05e5-4c6e-95f9-2748e85eb037\") " pod="openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8" Nov 25 15:09:08 crc kubenswrapper[4731]: I1125 15:09:08.771582 4731 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.221352 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8"] Nov 25 15:09:09 crc kubenswrapper[4731]: W1125 15:09:09.224510 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2b163b6_05e5_4c6e_95f9_2748e85eb037.slice/crio-497ac2aa4c9abc45f309280c2286e718997762cb5dccc8b6c0906a71b3394d8d WatchSource:0}: Error finding container 497ac2aa4c9abc45f309280c2286e718997762cb5dccc8b6c0906a71b3394d8d: Status 404 returned error can't find the container with id 497ac2aa4c9abc45f309280c2286e718997762cb5dccc8b6c0906a71b3394d8d Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.535262 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pgx8x"] Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.536626 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.550180 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgx8x"] Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.552193 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x98p\" (UniqueName: \"kubernetes.io/projected/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-kube-api-access-7x98p\") pod \"community-operators-pgx8x\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.552261 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-utilities\") pod \"community-operators-pgx8x\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.552329 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-catalog-content\") pod \"community-operators-pgx8x\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.653443 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x98p\" (UniqueName: \"kubernetes.io/projected/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-kube-api-access-7x98p\") pod \"community-operators-pgx8x\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.653522 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-utilities\") pod \"community-operators-pgx8x\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.653630 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-catalog-content\") pod \"community-operators-pgx8x\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.654466 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-utilities\") pod \"community-operators-pgx8x\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.654511 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-catalog-content\") pod \"community-operators-pgx8x\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.673525 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x98p\" (UniqueName: \"kubernetes.io/projected/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-kube-api-access-7x98p\") pod \"community-operators-pgx8x\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.856260 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:09 crc kubenswrapper[4731]: I1125 15:09:09.937121 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8" event={"ID":"c2b163b6-05e5-4c6e-95f9-2748e85eb037","Type":"ContainerStarted","Data":"497ac2aa4c9abc45f309280c2286e718997762cb5dccc8b6c0906a71b3394d8d"} Nov 25 15:09:10 crc kubenswrapper[4731]: I1125 15:09:10.185605 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pgx8x"] Nov 25 15:09:10 crc kubenswrapper[4731]: I1125 15:09:10.270954 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-9vf2q" Nov 25 15:09:10 crc kubenswrapper[4731]: I1125 15:09:10.272693 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9vf2q" Nov 25 15:09:10 crc kubenswrapper[4731]: I1125 15:09:10.393767 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9vf2q" Nov 25 15:09:10 crc kubenswrapper[4731]: I1125 15:09:10.952860 4731 generic.go:334] "Generic (PLEG): container finished" podID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerID="3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca" exitCode=0 Nov 25 15:09:10 crc kubenswrapper[4731]: I1125 15:09:10.952949 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgx8x" event={"ID":"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94","Type":"ContainerDied","Data":"3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca"} Nov 25 15:09:10 crc kubenswrapper[4731]: I1125 15:09:10.953007 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgx8x" 
event={"ID":"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94","Type":"ContainerStarted","Data":"97bb2baa10c6cca2e86533a1a1f66e8873d92b778d1e2be77f33106e5ddda00d"} Nov 25 15:09:10 crc kubenswrapper[4731]: I1125 15:09:10.998905 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9vf2q" Nov 25 15:09:13 crc kubenswrapper[4731]: I1125 15:09:13.922930 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9vf2q"] Nov 25 15:09:13 crc kubenswrapper[4731]: I1125 15:09:13.973489 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9vf2q" podUID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerName="registry-server" containerID="cri-o://07f98050d042bcc2f60da5f63c2eb29d4bc00fc0d808f2c5425ea58e6b7ef000" gracePeriod=2 Nov 25 15:09:15 crc kubenswrapper[4731]: I1125 15:09:15.987860 4731 generic.go:334] "Generic (PLEG): container finished" podID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerID="07f98050d042bcc2f60da5f63c2eb29d4bc00fc0d808f2c5425ea58e6b7ef000" exitCode=0 Nov 25 15:09:15 crc kubenswrapper[4731]: I1125 15:09:15.988737 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vf2q" event={"ID":"28881c92-a500-4d8b-8982-0e9aa793cf84","Type":"ContainerDied","Data":"07f98050d042bcc2f60da5f63c2eb29d4bc00fc0d808f2c5425ea58e6b7ef000"} Nov 25 15:09:15 crc kubenswrapper[4731]: I1125 15:09:15.988776 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9vf2q" event={"ID":"28881c92-a500-4d8b-8982-0e9aa793cf84","Type":"ContainerDied","Data":"79ecb1e4cc84716c8f3b1076101dfe760d2f3eeaf76249e7bba43f526c43e96a"} Nov 25 15:09:15 crc kubenswrapper[4731]: I1125 15:09:15.988788 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79ecb1e4cc84716c8f3b1076101dfe760d2f3eeaf76249e7bba43f526c43e96a" Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.020793 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9vf2q" Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.183610 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-utilities\") pod \"28881c92-a500-4d8b-8982-0e9aa793cf84\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.183694 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-catalog-content\") pod \"28881c92-a500-4d8b-8982-0e9aa793cf84\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.183873 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glssv\" (UniqueName: \"kubernetes.io/projected/28881c92-a500-4d8b-8982-0e9aa793cf84-kube-api-access-glssv\") pod \"28881c92-a500-4d8b-8982-0e9aa793cf84\" (UID: \"28881c92-a500-4d8b-8982-0e9aa793cf84\") " Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.184891 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-utilities" (OuterVolumeSpecName: "utilities") pod "28881c92-a500-4d8b-8982-0e9aa793cf84" (UID: "28881c92-a500-4d8b-8982-0e9aa793cf84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.192526 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28881c92-a500-4d8b-8982-0e9aa793cf84-kube-api-access-glssv" (OuterVolumeSpecName: "kube-api-access-glssv") pod "28881c92-a500-4d8b-8982-0e9aa793cf84" (UID: "28881c92-a500-4d8b-8982-0e9aa793cf84"). InnerVolumeSpecName "kube-api-access-glssv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.240096 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28881c92-a500-4d8b-8982-0e9aa793cf84" (UID: "28881c92-a500-4d8b-8982-0e9aa793cf84"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.285877 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glssv\" (UniqueName: \"kubernetes.io/projected/28881c92-a500-4d8b-8982-0e9aa793cf84-kube-api-access-glssv\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.285930 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.285945 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28881c92-a500-4d8b-8982-0e9aa793cf84-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.997876 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8" event={"ID":"c2b163b6-05e5-4c6e-95f9-2748e85eb037","Type":"ContainerStarted","Data":"4d70a3ea3dfb0328261cfdbe8a7f04d8f332e286e22d752970d0c57ce78ffc14"} Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.998342 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8" Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.999738 4731 generic.go:334] "Generic (PLEG): container finished" podID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerID="f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c" exitCode=0 Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.999775 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgx8x" event={"ID":"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94","Type":"ContainerDied","Data":"f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c"} Nov 25 15:09:16 crc kubenswrapper[4731]: I1125 15:09:16.999883 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9vf2q" Nov 25 15:09:17 crc kubenswrapper[4731]: I1125 15:09:17.038638 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8" podStartSLOduration=2.480175222 podStartE2EDuration="9.03860991s" podCreationTimestamp="2025-11-25 15:09:08 +0000 UTC" firstStartedPulling="2025-11-25 15:09:09.227451676 +0000 UTC m=+907.974586106" lastFinishedPulling="2025-11-25 15:09:15.785886364 +0000 UTC m=+914.533020794" observedRunningTime="2025-11-25 15:09:17.027855996 +0000 UTC m=+915.774990426" watchObservedRunningTime="2025-11-25 15:09:17.03860991 +0000 UTC m=+915.785744340" Nov 25 15:09:17 crc kubenswrapper[4731]: I1125 15:09:17.059074 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9vf2q"] Nov 25 15:09:17 crc kubenswrapper[4731]: I1125 15:09:17.066934 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9vf2q"] Nov 25 15:09:18 crc kubenswrapper[4731]: I1125 15:09:18.021465 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgx8x" event={"ID":"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94","Type":"ContainerStarted","Data":"a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674"} Nov 25 15:09:18 crc kubenswrapper[4731]: I1125 15:09:18.054365 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pgx8x" podStartSLOduration=2.621313841 podStartE2EDuration="9.054348152s" podCreationTimestamp="2025-11-25 15:09:09 +0000 UTC" firstStartedPulling="2025-11-25 15:09:10.956499769 +0000 UTC m=+909.703634199" lastFinishedPulling="2025-11-25 15:09:17.38953406 +0000 UTC m=+916.136668510" observedRunningTime="2025-11-25 15:09:18.049114374 +0000 UTC m=+916.796248824" watchObservedRunningTime="2025-11-25 15:09:18.054348152 +0000 UTC m=+916.801482572" Nov 25 15:09:18 crc kubenswrapper[4731]: I1125 15:09:18.819942 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28881c92-a500-4d8b-8982-0e9aa793cf84" path="/var/lib/kubelet/pods/28881c92-a500-4d8b-8982-0e9aa793cf84/volumes" Nov 25 15:09:19 crc kubenswrapper[4731]: I1125 15:09:19.856479 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:19 crc kubenswrapper[4731]: I1125 15:09:19.856980 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:19 crc kubenswrapper[4731]: I1125 15:09:19.911665 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:22 crc kubenswrapper[4731]: I1125 15:09:22.931275 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vssp9"] Nov 25 15:09:22 crc kubenswrapper[4731]: E1125 15:09:22.931667 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerName="extract-content" Nov 25 15:09:22 crc kubenswrapper[4731]: I1125 15:09:22.931688 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerName="extract-content" Nov 25 15:09:22 crc kubenswrapper[4731]: E1125 15:09:22.931707 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerName="registry-server" Nov 25 15:09:22 crc kubenswrapper[4731]: I1125 15:09:22.931722 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerName="registry-server" Nov 25 15:09:22 crc kubenswrapper[4731]: E1125 15:09:22.931742 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerName="extract-utilities" Nov 25 15:09:22 crc kubenswrapper[4731]: I1125 15:09:22.931753 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerName="extract-utilities" Nov 25 15:09:22 crc kubenswrapper[4731]: I1125 15:09:22.931965 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="28881c92-a500-4d8b-8982-0e9aa793cf84" containerName="registry-server" Nov 25 15:09:22 crc kubenswrapper[4731]: I1125 15:09:22.933309 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:22 crc kubenswrapper[4731]: I1125 15:09:22.971459 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vssp9"] Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.091530 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9cxl\" (UniqueName: \"kubernetes.io/projected/735529b8-3a25-4535-895f-5e5e8f410cc6-kube-api-access-n9cxl\") pod \"redhat-marketplace-vssp9\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.091608 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-catalog-content\") pod \"redhat-marketplace-vssp9\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.091758 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-utilities\") pod \"redhat-marketplace-vssp9\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.193088 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-utilities\") pod \"redhat-marketplace-vssp9\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.193199 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9cxl\" (UniqueName: \"kubernetes.io/projected/735529b8-3a25-4535-895f-5e5e8f410cc6-kube-api-access-n9cxl\") pod \"redhat-marketplace-vssp9\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.193227 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-catalog-content\") pod 
\"redhat-marketplace-vssp9\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.193688 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-catalog-content\") pod \"redhat-marketplace-vssp9\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.193709 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-utilities\") pod \"redhat-marketplace-vssp9\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.222834 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9cxl\" (UniqueName: \"kubernetes.io/projected/735529b8-3a25-4535-895f-5e5e8f410cc6-kube-api-access-n9cxl\") pod \"redhat-marketplace-vssp9\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.252826 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:23 crc kubenswrapper[4731]: I1125 15:09:23.513926 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vssp9"] Nov 25 15:09:24 crc kubenswrapper[4731]: I1125 15:09:24.060949 4731 generic.go:334] "Generic (PLEG): container finished" podID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerID="4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178" exitCode=0 Nov 25 15:09:24 crc kubenswrapper[4731]: I1125 15:09:24.061038 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vssp9" event={"ID":"735529b8-3a25-4535-895f-5e5e8f410cc6","Type":"ContainerDied","Data":"4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178"} Nov 25 15:09:24 crc kubenswrapper[4731]: I1125 15:09:24.061352 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vssp9" event={"ID":"735529b8-3a25-4535-895f-5e5e8f410cc6","Type":"ContainerStarted","Data":"ef5f738551fb1e0fb49ecf9aec33f5a8be86e3c8a8205bfbd520e5ba80bd44b7"} Nov 25 15:09:25 crc kubenswrapper[4731]: I1125 15:09:25.068689 4731 generic.go:334] "Generic (PLEG): container finished" podID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerID="dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5" exitCode=0 Nov 25 15:09:25 crc kubenswrapper[4731]: I1125 15:09:25.068752 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vssp9" event={"ID":"735529b8-3a25-4535-895f-5e5e8f410cc6","Type":"ContainerDied","Data":"dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5"} Nov 25 15:09:26 crc kubenswrapper[4731]: I1125 15:09:26.079525 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vssp9" event={"ID":"735529b8-3a25-4535-895f-5e5e8f410cc6","Type":"ContainerStarted","Data":"990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be"} Nov 25 15:09:26 crc kubenswrapper[4731]: I1125 15:09:26.104420 4731 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vssp9" podStartSLOduration=2.5963484169999997 podStartE2EDuration="4.104396749s" podCreationTimestamp="2025-11-25 15:09:22 +0000 UTC" firstStartedPulling="2025-11-25 15:09:24.063142584 +0000 UTC m=+922.810277014" lastFinishedPulling="2025-11-25 15:09:25.571190926 +0000 UTC m=+924.318325346" observedRunningTime="2025-11-25 15:09:26.099258483 +0000 UTC m=+924.846392913" watchObservedRunningTime="2025-11-25 15:09:26.104396749 +0000 UTC m=+924.851531179" Nov 25 15:09:28 crc kubenswrapper[4731]: I1125 15:09:28.774131 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-f546cbb6c-wz8k8" Nov 25 15:09:29 crc kubenswrapper[4731]: I1125 15:09:29.901721 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.133080 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pgx8x"] Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.133397 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pgx8x" podUID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerName="registry-server" containerID="cri-o://a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674" gracePeriod=2 Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.659545 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.732672 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-catalog-content\") pod \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.732724 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-utilities\") pod \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.732931 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x98p\" (UniqueName: \"kubernetes.io/projected/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-kube-api-access-7x98p\") pod \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\" (UID: \"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94\") " Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.733805 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-utilities" (OuterVolumeSpecName: "utilities") pod "6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" (UID: "6188cbdb-0cf1-4a01-b0fc-68262a5ffa94"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.749647 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-kube-api-access-7x98p" (OuterVolumeSpecName: "kube-api-access-7x98p") pod "6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" (UID: "6188cbdb-0cf1-4a01-b0fc-68262a5ffa94"). InnerVolumeSpecName "kube-api-access-7x98p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.785974 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" (UID: "6188cbdb-0cf1-4a01-b0fc-68262a5ffa94"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.835708 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x98p\" (UniqueName: \"kubernetes.io/projected/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-kube-api-access-7x98p\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.835755 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:30 crc kubenswrapper[4731]: I1125 15:09:30.835765 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.120164 4731 generic.go:334] "Generic (PLEG): container finished" podID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerID="a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674" exitCode=0 Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.120217 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgx8x" event={"ID":"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94","Type":"ContainerDied","Data":"a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674"} Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.120265 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pgx8x" event={"ID":"6188cbdb-0cf1-4a01-b0fc-68262a5ffa94","Type":"ContainerDied","Data":"97bb2baa10c6cca2e86533a1a1f66e8873d92b778d1e2be77f33106e5ddda00d"} Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.120283 4731 scope.go:117] "RemoveContainer" containerID="a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674" Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.120473 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pgx8x" Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.150485 4731 scope.go:117] "RemoveContainer" containerID="f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c" Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.172916 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pgx8x"] Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.182861 4731 scope.go:117] "RemoveContainer" containerID="3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca" Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.197161 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pgx8x"] Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.223068 4731 scope.go:117] "RemoveContainer" containerID="a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674" Nov 25 15:09:31 crc kubenswrapper[4731]: E1125 15:09:31.231901 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674\": container with ID starting with a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674 not found: ID does not exist" containerID="a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674" Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.231971 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674"} err="failed to get container status \"a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674\": rpc error: code = NotFound desc = could not find container \"a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674\": container with ID starting with a75d538ead59cc1dfe3dac31938ac0905d990894b721f4df218e0005697c8674 not found: ID does not exist" Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.232020 4731 scope.go:117] "RemoveContainer" containerID="f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c" Nov 25 15:09:31 crc kubenswrapper[4731]: E1125 15:09:31.239009 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c\": container with ID starting with f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c not found: ID does not exist" containerID="f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c" Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.239068 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c"} err="failed to get container status \"f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c\": rpc error: code = NotFound desc = could not find container \"f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c\": container with ID starting with f8dcd91b73d41d1335a80191e8ad9f51a8d863d54f39fe14446a7ea7ab571a8c not found: ID does not exist" Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.239099 4731 scope.go:117] "RemoveContainer" containerID="3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca" Nov 25 15:09:31 crc kubenswrapper[4731]: E1125 15:09:31.239734 4731 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca\": container with ID starting with 3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca not found: ID does not exist" containerID="3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca" Nov 25 15:09:31 crc kubenswrapper[4731]: I1125 15:09:31.239794 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca"} err="failed to get container status \"3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca\": rpc error: code = NotFound desc = could not find container \"3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca\": container with ID starting with 3c69198166a148f8c92a1fb80f33957bd1799718d39990a3f7af5f7dafafa1ca not found: ID does not exist" Nov 25 15:09:32 crc kubenswrapper[4731]: I1125 15:09:32.821152 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" path="/var/lib/kubelet/pods/6188cbdb-0cf1-4a01-b0fc-68262a5ffa94/volumes" Nov 25 15:09:33 crc kubenswrapper[4731]: I1125 15:09:33.253098 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:33 crc kubenswrapper[4731]: I1125 15:09:33.253956 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:33 crc kubenswrapper[4731]: I1125 15:09:33.313484 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:34 crc kubenswrapper[4731]: I1125 15:09:34.188968 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:35 crc kubenswrapper[4731]: I1125 15:09:35.348321 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vssp9"] Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.159047 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vssp9" podUID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerName="registry-server" containerID="cri-o://990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be" gracePeriod=2 Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.562400 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.622316 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-utilities\") pod \"735529b8-3a25-4535-895f-5e5e8f410cc6\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.622406 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-catalog-content\") pod \"735529b8-3a25-4535-895f-5e5e8f410cc6\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.622465 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9cxl\" (UniqueName: \"kubernetes.io/projected/735529b8-3a25-4535-895f-5e5e8f410cc6-kube-api-access-n9cxl\") pod \"735529b8-3a25-4535-895f-5e5e8f410cc6\" (UID: \"735529b8-3a25-4535-895f-5e5e8f410cc6\") " Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.623585 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-utilities" (OuterVolumeSpecName: "utilities") pod "735529b8-3a25-4535-895f-5e5e8f410cc6" (UID: "735529b8-3a25-4535-895f-5e5e8f410cc6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.628906 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735529b8-3a25-4535-895f-5e5e8f410cc6-kube-api-access-n9cxl" (OuterVolumeSpecName: "kube-api-access-n9cxl") pod "735529b8-3a25-4535-895f-5e5e8f410cc6" (UID: "735529b8-3a25-4535-895f-5e5e8f410cc6"). InnerVolumeSpecName "kube-api-access-n9cxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.642637 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "735529b8-3a25-4535-895f-5e5e8f410cc6" (UID: "735529b8-3a25-4535-895f-5e5e8f410cc6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.724748 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.724788 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735529b8-3a25-4535-895f-5e5e8f410cc6-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:36 crc kubenswrapper[4731]: I1125 15:09:36.724804 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9cxl\" (UniqueName: \"kubernetes.io/projected/735529b8-3a25-4535-895f-5e5e8f410cc6-kube-api-access-n9cxl\") on node \"crc\" DevicePath \"\"" Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.169612 4731 generic.go:334] "Generic (PLEG): container finished" podID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerID="990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be" exitCode=0 Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.169697 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vssp9" Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.169731 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vssp9" event={"ID":"735529b8-3a25-4535-895f-5e5e8f410cc6","Type":"ContainerDied","Data":"990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be"} Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.170402 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vssp9" event={"ID":"735529b8-3a25-4535-895f-5e5e8f410cc6","Type":"ContainerDied","Data":"ef5f738551fb1e0fb49ecf9aec33f5a8be86e3c8a8205bfbd520e5ba80bd44b7"} Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.170435 4731 scope.go:117] "RemoveContainer" containerID="990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be" Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.198787 4731 scope.go:117] "RemoveContainer" containerID="dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5" Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.204252 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vssp9"] Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.210327 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vssp9"] Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.220972 4731 scope.go:117] "RemoveContainer" containerID="4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178" Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.240572 4731 scope.go:117] "RemoveContainer" containerID="990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be" Nov 25 15:09:37 crc kubenswrapper[4731]: E1125 15:09:37.241127 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be\": container with ID starting with 990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be not found: ID does not exist" containerID="990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be" Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.241188 4731 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be"} err="failed to get container status \"990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be\": rpc error: code = NotFound desc = could not find container \"990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be\": container with ID starting with 990e77736de90743bbfb978cb9bc50f0790a194ff6de86dde6ea48bd7d31d7be not found: ID does not exist" Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.241226 4731 scope.go:117] "RemoveContainer" containerID="dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5" Nov 25 15:09:37 crc kubenswrapper[4731]: E1125 15:09:37.241530 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5\": container with ID starting with dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5 not found: ID does not exist" containerID="dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5" Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.241559 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5"} err="failed to get container status \"dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5\": rpc error: code = NotFound desc = could not find container \"dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5\": container with ID starting with dd342a19af75e95838e87bc0f2d5a9b42181a04da96e9bf8d83e935d0d2857e5 not found: ID does not exist" Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.241572 4731 scope.go:117] "RemoveContainer" containerID="4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178" Nov 25 15:09:37 crc kubenswrapper[4731]: E1125 15:09:37.242215 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178\": container with ID starting with 4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178 not found: ID does not exist" containerID="4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178" Nov 25 15:09:37 crc kubenswrapper[4731]: I1125 15:09:37.242242 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178"} err="failed to get container status \"4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178\": rpc error: code = NotFound desc = could not find container \"4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178\": container with ID starting with 4fd73e6dc044a088a741df3e28778efab2cd549d019824a9545f82b925a1a178 not found: ID does not exist" Nov 25 15:09:38 crc kubenswrapper[4731]: I1125 15:09:38.823854 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735529b8-3a25-4535-895f-5e5e8f410cc6" path="/var/lib/kubelet/pods/735529b8-3a25-4535-895f-5e5e8f410cc6/volumes" Nov 25 15:09:50 crc kubenswrapper[4731]: I1125 15:09:50.040429 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:09:50 crc kubenswrapper[4731]: I1125 15:09:50.041398 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.289841 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4"] Nov 25 15:09:54 crc kubenswrapper[4731]: E1125 15:09:54.290327 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerName="registry-server" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.290341 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerName="registry-server" Nov 25 15:09:54 crc kubenswrapper[4731]: E1125 15:09:54.290356 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerName="extract-content" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.290362 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerName="extract-content" Nov 25 15:09:54 crc kubenswrapper[4731]: E1125 15:09:54.290374 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerName="extract-content" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.290382 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerName="extract-content" Nov 25 15:09:54 crc kubenswrapper[4731]: E1125 15:09:54.290395 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerName="extract-utilities" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.290401 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerName="extract-utilities" Nov 25 15:09:54 crc kubenswrapper[4731]: E1125 15:09:54.290415 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerName="registry-server" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.290421 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerName="registry-server" Nov 25 15:09:54 crc kubenswrapper[4731]: E1125 15:09:54.290428 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerName="extract-utilities" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.290433 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerName="extract-utilities" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.290575 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="735529b8-3a25-4535-895f-5e5e8f410cc6" containerName="registry-server" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.290586 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6188cbdb-0cf1-4a01-b0fc-68262a5ffa94" containerName="registry-server" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.291324 4731 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.294017 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-x5gk4" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.300479 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.302294 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.304738 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-8vzzr" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.317496 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.330645 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.343574 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.344963 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.348790 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4c4lj" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.372709 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.374123 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.390401 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.392183 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-fj98g" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.405322 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.406491 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.418647 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhsp8\" (UniqueName: \"kubernetes.io/projected/f5180254-3122-4550-b259-f870f4575073-kube-api-access-rhsp8\") pod \"designate-operator-controller-manager-7d695c9b56-hxwn7\" (UID: \"f5180254-3122-4550-b259-f870f4575073\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.418718 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsrmd\" (UniqueName: \"kubernetes.io/projected/92f78ba4-d5a9-43f8-bb6d-7d226c301ca4-kube-api-access-dsrmd\") pod \"cinder-operator-controller-manager-79856dc55c-8w79f\" (UID: \"92f78ba4-d5a9-43f8-bb6d-7d226c301ca4\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.418798 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj27v\" (UniqueName: \"kubernetes.io/projected/a706f3bc-21c7-416e-b7cb-f0886804bc08-kube-api-access-mj27v\") pod \"barbican-operator-controller-manager-86dc4d89c8-vldj4\" (UID: \"a706f3bc-21c7-416e-b7cb-f0886804bc08\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.429487 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-nfp9b" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.442089 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.457271 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.496880 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.497995 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.499929 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.519890 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj27v\" (UniqueName: \"kubernetes.io/projected/a706f3bc-21c7-416e-b7cb-f0886804bc08-kube-api-access-mj27v\") pod \"barbican-operator-controller-manager-86dc4d89c8-vldj4\" (UID: \"a706f3bc-21c7-416e-b7cb-f0886804bc08\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.519961 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbmmh\" (UniqueName: \"kubernetes.io/projected/ab316c81-bc21-47b6-8dbd-bd6c26cc2769-kube-api-access-bbmmh\") pod \"heat-operator-controller-manager-774b86978c-kxfnt\" (UID: \"ab316c81-bc21-47b6-8dbd-bd6c26cc2769\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.520002 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnzs\" (UniqueName: \"kubernetes.io/projected/8212bdfe-bb4a-4865-9ef9-ba58232188f7-kube-api-access-dvnzs\") pod \"glance-operator-controller-manager-68b95954c9-r52fx\" (UID: \"8212bdfe-bb4a-4865-9ef9-ba58232188f7\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.520040 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhsp8\" (UniqueName: \"kubernetes.io/projected/f5180254-3122-4550-b259-f870f4575073-kube-api-access-rhsp8\") pod \"designate-operator-controller-manager-7d695c9b56-hxwn7\" (UID: \"f5180254-3122-4550-b259-f870f4575073\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.520062 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsrmd\" (UniqueName: \"kubernetes.io/projected/92f78ba4-d5a9-43f8-bb6d-7d226c301ca4-kube-api-access-dsrmd\") pod \"cinder-operator-controller-manager-79856dc55c-8w79f\" (UID: \"92f78ba4-d5a9-43f8-bb6d-7d226c301ca4\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.520081 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9mbg\" (UniqueName: \"kubernetes.io/projected/e4f0ff64-5199-4a19-8657-08ff58d915da-kube-api-access-n9mbg\") pod \"horizon-operator-controller-manager-68c9694994-kn45z\" (UID: \"e4f0ff64-5199-4a19-8657-08ff58d915da\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.521702 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.523236 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.528064 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2bqmq" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.528092 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.528406 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hmgjt" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.535675 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.537291 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.552692 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-9q9kh" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.561867 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.579595 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhsp8\" (UniqueName: \"kubernetes.io/projected/f5180254-3122-4550-b259-f870f4575073-kube-api-access-rhsp8\") pod \"designate-operator-controller-manager-7d695c9b56-hxwn7\" (UID: \"f5180254-3122-4550-b259-f870f4575073\") " pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.589916 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.597287 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj27v\" (UniqueName: \"kubernetes.io/projected/a706f3bc-21c7-416e-b7cb-f0886804bc08-kube-api-access-mj27v\") pod \"barbican-operator-controller-manager-86dc4d89c8-vldj4\" (UID: \"a706f3bc-21c7-416e-b7cb-f0886804bc08\") " pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.597938 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsrmd\" (UniqueName: \"kubernetes.io/projected/92f78ba4-d5a9-43f8-bb6d-7d226c301ca4-kube-api-access-dsrmd\") pod \"cinder-operator-controller-manager-79856dc55c-8w79f\" (UID: \"92f78ba4-d5a9-43f8-bb6d-7d226c301ca4\") " pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.604516 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.616272 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-mtck5" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.626741 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.628287 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9mbg\" (UniqueName: \"kubernetes.io/projected/e4f0ff64-5199-4a19-8657-08ff58d915da-kube-api-access-n9mbg\") pod \"horizon-operator-controller-manager-68c9694994-kn45z\" (UID: \"e4f0ff64-5199-4a19-8657-08ff58d915da\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.628402 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbmmh\" (UniqueName: \"kubernetes.io/projected/ab316c81-bc21-47b6-8dbd-bd6c26cc2769-kube-api-access-bbmmh\") pod \"heat-operator-controller-manager-774b86978c-kxfnt\" (UID: \"ab316c81-bc21-47b6-8dbd-bd6c26cc2769\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.628459 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvnzs\" (UniqueName: \"kubernetes.io/projected/8212bdfe-bb4a-4865-9ef9-ba58232188f7-kube-api-access-dvnzs\") pod \"glance-operator-controller-manager-68b95954c9-r52fx\" (UID: \"8212bdfe-bb4a-4865-9ef9-ba58232188f7\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.628510 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lntz\" (UniqueName: \"kubernetes.io/projected/83e6ff4c-b0f0-4d61-8923-31c0d3f1398f-kube-api-access-5lntz\") pod \"keystone-operator-controller-manager-748dc6576f-h4whj\" (UID: \"83e6ff4c-b0f0-4d61-8923-31c0d3f1398f\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.631921 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.640755 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.651734 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.652968 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.656337 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-v6vp4" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.681351 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvnzs\" (UniqueName: \"kubernetes.io/projected/8212bdfe-bb4a-4865-9ef9-ba58232188f7-kube-api-access-dvnzs\") pod \"glance-operator-controller-manager-68b95954c9-r52fx\" (UID: \"8212bdfe-bb4a-4865-9ef9-ba58232188f7\") " pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.687648 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbmmh\" (UniqueName: \"kubernetes.io/projected/ab316c81-bc21-47b6-8dbd-bd6c26cc2769-kube-api-access-bbmmh\") pod \"heat-operator-controller-manager-774b86978c-kxfnt\" (UID: \"ab316c81-bc21-47b6-8dbd-bd6c26cc2769\") " pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.687759 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.699521 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9mbg\" (UniqueName: \"kubernetes.io/projected/e4f0ff64-5199-4a19-8657-08ff58d915da-kube-api-access-n9mbg\") pod \"horizon-operator-controller-manager-68c9694994-kn45z\" (UID: \"e4f0ff64-5199-4a19-8657-08ff58d915da\") " pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.725393 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.728060 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.730139 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lntz\" (UniqueName: \"kubernetes.io/projected/83e6ff4c-b0f0-4d61-8923-31c0d3f1398f-kube-api-access-5lntz\") pod \"keystone-operator-controller-manager-748dc6576f-h4whj\" (UID: \"83e6ff4c-b0f0-4d61-8923-31c0d3f1398f\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.730207 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-2sf58\" (UID: \"d195c5e8-4c33-44ad-ba19-deadfc6e21ef\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.730243 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2cgt\" (UniqueName: \"kubernetes.io/projected/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-kube-api-access-z2cgt\") pod \"infra-operator-controller-manager-d5cc86f4b-2sf58\" (UID: \"d195c5e8-4c33-44ad-ba19-deadfc6e21ef\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.730272 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56sl7\" (UniqueName: \"kubernetes.io/projected/fa757060-f39f-4482-8fcb-41ae41ef9cb9-kube-api-access-56sl7\") pod \"ironic-operator-controller-manager-5bfcdc958c-sh9ng\" (UID: \"fa757060-f39f-4482-8fcb-41ae41ef9cb9\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.730860 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.735265 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-9qhkq" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.750187 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.772402 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lntz\" (UniqueName: \"kubernetes.io/projected/83e6ff4c-b0f0-4d61-8923-31c0d3f1398f-kube-api-access-5lntz\") pod \"keystone-operator-controller-manager-748dc6576f-h4whj\" (UID: \"83e6ff4c-b0f0-4d61-8923-31c0d3f1398f\") " pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.780529 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.788076 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.796953 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.808265 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.815410 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-lh5v2" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.831364 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.834262 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpgt9\" (UniqueName: \"kubernetes.io/projected/1af6726c-a2ca-43a1-b932-cddcd28457d1-kube-api-access-dpgt9\") pod \"manila-operator-controller-manager-58bb8d67cc-2npjw\" (UID: \"1af6726c-a2ca-43a1-b932-cddcd28457d1\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.834335 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42nn8\" (UniqueName: \"kubernetes.io/projected/01518faf-8637-4852-b877-b5c8ca2c7d91-kube-api-access-42nn8\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-7kltb\" (UID: \"01518faf-8637-4852-b877-b5c8ca2c7d91\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.834381 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-2sf58\" (UID: \"d195c5e8-4c33-44ad-ba19-deadfc6e21ef\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.834434 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2cgt\" (UniqueName: \"kubernetes.io/projected/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-kube-api-access-z2cgt\") pod \"infra-operator-controller-manager-d5cc86f4b-2sf58\" (UID: \"d195c5e8-4c33-44ad-ba19-deadfc6e21ef\") " 
pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.834487 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56sl7\" (UniqueName: \"kubernetes.io/projected/fa757060-f39f-4482-8fcb-41ae41ef9cb9-kube-api-access-56sl7\") pod \"ironic-operator-controller-manager-5bfcdc958c-sh9ng\" (UID: \"fa757060-f39f-4482-8fcb-41ae41ef9cb9\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" Nov 25 15:09:54 crc kubenswrapper[4731]: E1125 15:09:54.835219 4731 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 15:09:54 crc kubenswrapper[4731]: E1125 15:09:54.841907 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-cert podName:d195c5e8-4c33-44ad-ba19-deadfc6e21ef nodeName:}" failed. No retries permitted until 2025-11-25 15:09:55.335263968 +0000 UTC m=+954.082398398 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-cert") pod "infra-operator-controller-manager-d5cc86f4b-2sf58" (UID: "d195c5e8-4c33-44ad-ba19-deadfc6e21ef") : secret "infra-operator-webhook-server-cert" not found Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.859943 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.874520 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2cgt\" (UniqueName: \"kubernetes.io/projected/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-kube-api-access-z2cgt\") pod \"infra-operator-controller-manager-d5cc86f4b-2sf58\" (UID: \"d195c5e8-4c33-44ad-ba19-deadfc6e21ef\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.884238 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56sl7\" (UniqueName: \"kubernetes.io/projected/fa757060-f39f-4482-8fcb-41ae41ef9cb9-kube-api-access-56sl7\") pod \"ironic-operator-controller-manager-5bfcdc958c-sh9ng\" (UID: \"fa757060-f39f-4482-8fcb-41ae41ef9cb9\") " pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.913565 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.915810 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.919729 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-9vn7p" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.928095 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.937120 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpgt9\" (UniqueName: \"kubernetes.io/projected/1af6726c-a2ca-43a1-b932-cddcd28457d1-kube-api-access-dpgt9\") pod \"manila-operator-controller-manager-58bb8d67cc-2npjw\" (UID: \"1af6726c-a2ca-43a1-b932-cddcd28457d1\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.937175 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42nn8\" (UniqueName: \"kubernetes.io/projected/01518faf-8637-4852-b877-b5c8ca2c7d91-kube-api-access-42nn8\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-7kltb\" (UID: \"01518faf-8637-4852-b877-b5c8ca2c7d91\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.937266 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7qcn\" (UniqueName: \"kubernetes.io/projected/f5ea628e-44a7-499c-a433-14d18a9bb804-kube-api-access-w7qcn\") pod \"neutron-operator-controller-manager-7c57c8bbc4-j2wt5\" (UID: \"f5ea628e-44a7-499c-a433-14d18a9bb804\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.957997 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.964552 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.973902 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.975439 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.977107 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42nn8\" (UniqueName: \"kubernetes.io/projected/01518faf-8637-4852-b877-b5c8ca2c7d91-kube-api-access-42nn8\") pod \"mariadb-operator-controller-manager-cb6c4fdb7-7kltb\" (UID: \"01518faf-8637-4852-b877-b5c8ca2c7d91\") " pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.977686 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpgt9\" (UniqueName: \"kubernetes.io/projected/1af6726c-a2ca-43a1-b932-cddcd28457d1-kube-api-access-dpgt9\") pod \"manila-operator-controller-manager-58bb8d67cc-2npjw\" (UID: \"1af6726c-a2ca-43a1-b932-cddcd28457d1\") " pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.992879 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql"] Nov 25 15:09:54 crc kubenswrapper[4731]: I1125 15:09:54.993121 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hgdh5" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.013901 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.015018 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.018051 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-d244q" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.018743 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.020586 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.024008 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-97xgq" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.024279 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.026389 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.033835 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.033916 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.039906 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w7qcn\" (UniqueName: \"kubernetes.io/projected/f5ea628e-44a7-499c-a433-14d18a9bb804-kube-api-access-w7qcn\") pod \"neutron-operator-controller-manager-7c57c8bbc4-j2wt5\" (UID: \"f5ea628e-44a7-499c-a433-14d18a9bb804\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.040031 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl7q4\" (UniqueName: \"kubernetes.io/projected/39362f78-55ff-4380-80b3-c46e2d004749-kube-api-access-xl7q4\") pod \"nova-operator-controller-manager-79556f57fc-qjrbz\" (UID: \"39362f78-55ff-4380-80b3-c46e2d004749\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.043580 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.044376 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.047260 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.053325 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-d8bzl" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.062836 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.069452 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.071376 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.074846 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-qd6s4" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.074861 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w7qcn\" (UniqueName: \"kubernetes.io/projected/f5ea628e-44a7-499c-a433-14d18a9bb804-kube-api-access-w7qcn\") pod \"neutron-operator-controller-manager-7c57c8bbc4-j2wt5\" (UID: \"f5ea628e-44a7-499c-a433-14d18a9bb804\") " pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.077962 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.100041 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.101505 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.110336 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-w5bcm" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.111676 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.135430 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.142145 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.143363 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-kctp7\" (UID: \"1bdef4b3-96db-42a5-aefd-2641cced3d1e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.143420 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xl7q4\" (UniqueName: \"kubernetes.io/projected/39362f78-55ff-4380-80b3-c46e2d004749-kube-api-access-xl7q4\") pod \"nova-operator-controller-manager-79556f57fc-qjrbz\" (UID: \"39362f78-55ff-4380-80b3-c46e2d004749\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.143467 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvskj\" (UniqueName: \"kubernetes.io/projected/77bbfdb5-d5b4-4512-ac18-e3d787eea6f2-kube-api-access-wvskj\") pod \"ovn-operator-controller-manager-66cf5c67ff-wmjzm\" (UID: \"77bbfdb5-d5b4-4512-ac18-e3d787eea6f2\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.143522 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6bkp\" (UniqueName: \"kubernetes.io/projected/1bdef4b3-96db-42a5-aefd-2641cced3d1e-kube-api-access-k6bkp\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-kctp7\" (UID: \"1bdef4b3-96db-42a5-aefd-2641cced3d1e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.143553 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qt5m\" (UniqueName: \"kubernetes.io/projected/1eedb0ec-4f2e-43da-8e15-aae0436e942e-kube-api-access-9qt5m\") pod \"octavia-operator-controller-manager-fd75fd47d-bdwql\" (UID: \"1eedb0ec-4f2e-43da-8e15-aae0436e942e\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.150767 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.162026 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.170196 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-dzzb7" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.194978 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.199939 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xl7q4\" (UniqueName: \"kubernetes.io/projected/39362f78-55ff-4380-80b3-c46e2d004749-kube-api-access-xl7q4\") pod \"nova-operator-controller-manager-79556f57fc-qjrbz\" (UID: \"39362f78-55ff-4380-80b3-c46e2d004749\") " pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.244258 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-sk2wl"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.245924 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.246607 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-sk2wl"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.248885 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-kctp7\" (UID: \"1bdef4b3-96db-42a5-aefd-2641cced3d1e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.248962 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmrbw\" (UniqueName: \"kubernetes.io/projected/61221a67-95aa-4e96-a41c-4d9343305157-kube-api-access-nmrbw\") pod \"placement-operator-controller-manager-5db546f9d9-8m7t9\" (UID: \"61221a67-95aa-4e96-a41c-4d9343305157\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.249027 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvskj\" (UniqueName: \"kubernetes.io/projected/77bbfdb5-d5b4-4512-ac18-e3d787eea6f2-kube-api-access-wvskj\") pod \"ovn-operator-controller-manager-66cf5c67ff-wmjzm\" (UID: \"77bbfdb5-d5b4-4512-ac18-e3d787eea6f2\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.249071 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmch8\" (UniqueName: \"kubernetes.io/projected/bc21926a-c3cf-49fa-addf-01d3156c1b5f-kube-api-access-rmch8\") pod \"swift-operator-controller-manager-6fdc4fcf86-ft6fb\" (UID: \"bc21926a-c3cf-49fa-addf-01d3156c1b5f\") " 
pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.249136 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qt5m\" (UniqueName: \"kubernetes.io/projected/1eedb0ec-4f2e-43da-8e15-aae0436e942e-kube-api-access-9qt5m\") pod \"octavia-operator-controller-manager-fd75fd47d-bdwql\" (UID: \"1eedb0ec-4f2e-43da-8e15-aae0436e942e\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.249165 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6bkp\" (UniqueName: \"kubernetes.io/projected/1bdef4b3-96db-42a5-aefd-2641cced3d1e-kube-api-access-k6bkp\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-kctp7\" (UID: \"1bdef4b3-96db-42a5-aefd-2641cced3d1e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.249207 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgs4j\" (UniqueName: \"kubernetes.io/projected/089e8703-f71e-4f81-91af-19224ba7b1b4-kube-api-access-qgs4j\") pod \"telemetry-operator-controller-manager-75cd8d4b6c-kkqgj\" (UID: \"089e8703-f71e-4f81-91af-19224ba7b1b4\") " pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" Nov 25 15:09:55 crc kubenswrapper[4731]: E1125 15:09:55.249378 4731 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 15:09:55 crc kubenswrapper[4731]: E1125 15:09:55.249445 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert podName:1bdef4b3-96db-42a5-aefd-2641cced3d1e nodeName:}" failed. No retries permitted until 2025-11-25 15:09:55.74942056 +0000 UTC m=+954.496554990 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" (UID: "1bdef4b3-96db-42a5-aefd-2641cced3d1e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.250728 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-8q4ng" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.273443 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.281899 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.282741 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qt5m\" (UniqueName: \"kubernetes.io/projected/1eedb0ec-4f2e-43da-8e15-aae0436e942e-kube-api-access-9qt5m\") pod \"octavia-operator-controller-manager-fd75fd47d-bdwql\" (UID: \"1eedb0ec-4f2e-43da-8e15-aae0436e942e\") " pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.283440 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.288118 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.288422 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-c8glc" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.288255 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.296806 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6bkp\" (UniqueName: \"kubernetes.io/projected/1bdef4b3-96db-42a5-aefd-2641cced3d1e-kube-api-access-k6bkp\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-kctp7\" (UID: \"1bdef4b3-96db-42a5-aefd-2641cced3d1e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.297446 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvskj\" (UniqueName: \"kubernetes.io/projected/77bbfdb5-d5b4-4512-ac18-e3d787eea6f2-kube-api-access-wvskj\") pod \"ovn-operator-controller-manager-66cf5c67ff-wmjzm\" (UID: \"77bbfdb5-d5b4-4512-ac18-e3d787eea6f2\") " pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.304140 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.358932 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmrbw\" (UniqueName: \"kubernetes.io/projected/61221a67-95aa-4e96-a41c-4d9343305157-kube-api-access-nmrbw\") pod \"placement-operator-controller-manager-5db546f9d9-8m7t9\" (UID: \"61221a67-95aa-4e96-a41c-4d9343305157\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.359058 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2zjx\" (UniqueName: \"kubernetes.io/projected/158dbcc7-a9af-464b-9886-4f5f6f4bbf71-kube-api-access-g2zjx\") pod \"watcher-operator-controller-manager-864885998-sk2wl\" (UID: \"158dbcc7-a9af-464b-9886-4f5f6f4bbf71\") " 
pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.359107 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5lr7\" (UniqueName: \"kubernetes.io/projected/54f47a6e-c9be-49a3-89af-43ecbd6b7b93-kube-api-access-j5lr7\") pod \"test-operator-controller-manager-5cb74df96-9w7kw\" (UID: \"54f47a6e-c9be-49a3-89af-43ecbd6b7b93\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.365599 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmch8\" (UniqueName: \"kubernetes.io/projected/bc21926a-c3cf-49fa-addf-01d3156c1b5f-kube-api-access-rmch8\") pod \"swift-operator-controller-manager-6fdc4fcf86-ft6fb\" (UID: \"bc21926a-c3cf-49fa-addf-01d3156c1b5f\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.365744 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgs4j\" (UniqueName: \"kubernetes.io/projected/089e8703-f71e-4f81-91af-19224ba7b1b4-kube-api-access-qgs4j\") pod \"telemetry-operator-controller-manager-75cd8d4b6c-kkqgj\" (UID: \"089e8703-f71e-4f81-91af-19224ba7b1b4\") " pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.365903 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-2sf58\" (UID: \"d195c5e8-4c33-44ad-ba19-deadfc6e21ef\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:09:55 crc kubenswrapper[4731]: E1125 15:09:55.366130 4731 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Nov 25 15:09:55 crc kubenswrapper[4731]: E1125 15:09:55.366212 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-cert podName:d195c5e8-4c33-44ad-ba19-deadfc6e21ef nodeName:}" failed. No retries permitted until 2025-11-25 15:09:56.366180985 +0000 UTC m=+955.113315415 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-cert") pod "infra-operator-controller-manager-d5cc86f4b-2sf58" (UID: "d195c5e8-4c33-44ad-ba19-deadfc6e21ef") : secret "infra-operator-webhook-server-cert" not found Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.369375 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.382076 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.386231 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-c8xw9" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.398287 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.399775 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.405043 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmch8\" (UniqueName: \"kubernetes.io/projected/bc21926a-c3cf-49fa-addf-01d3156c1b5f-kube-api-access-rmch8\") pod \"swift-operator-controller-manager-6fdc4fcf86-ft6fb\" (UID: \"bc21926a-c3cf-49fa-addf-01d3156c1b5f\") " pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.411994 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmrbw\" (UniqueName: \"kubernetes.io/projected/61221a67-95aa-4e96-a41c-4d9343305157-kube-api-access-nmrbw\") pod \"placement-operator-controller-manager-5db546f9d9-8m7t9\" (UID: \"61221a67-95aa-4e96-a41c-4d9343305157\") " pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.412417 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.436093 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgs4j\" (UniqueName: \"kubernetes.io/projected/089e8703-f71e-4f81-91af-19224ba7b1b4-kube-api-access-qgs4j\") pod \"telemetry-operator-controller-manager-75cd8d4b6c-kkqgj\" (UID: \"089e8703-f71e-4f81-91af-19224ba7b1b4\") " pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.468251 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.468799 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvm2d\" (UniqueName: \"kubernetes.io/projected/2aeac7ab-78a9-49f2-9640-0c91b73bbe57-kube-api-access-nvm2d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6tddn\" (UID: \"2aeac7ab-78a9-49f2-9640-0c91b73bbe57\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.468978 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.469059 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2zjx\" (UniqueName: \"kubernetes.io/projected/158dbcc7-a9af-464b-9886-4f5f6f4bbf71-kube-api-access-g2zjx\") 
pod \"watcher-operator-controller-manager-864885998-sk2wl\" (UID: \"158dbcc7-a9af-464b-9886-4f5f6f4bbf71\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.469112 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5lr7\" (UniqueName: \"kubernetes.io/projected/54f47a6e-c9be-49a3-89af-43ecbd6b7b93-kube-api-access-j5lr7\") pod \"test-operator-controller-manager-5cb74df96-9w7kw\" (UID: \"54f47a6e-c9be-49a3-89af-43ecbd6b7b93\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.469160 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmwqw\" (UniqueName: \"kubernetes.io/projected/7d6095a8-a478-472d-b84b-c99877ddcdf1-kube-api-access-hmwqw\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.494289 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5lr7\" (UniqueName: \"kubernetes.io/projected/54f47a6e-c9be-49a3-89af-43ecbd6b7b93-kube-api-access-j5lr7\") pod \"test-operator-controller-manager-5cb74df96-9w7kw\" (UID: \"54f47a6e-c9be-49a3-89af-43ecbd6b7b93\") " pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.496641 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.497271 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g2zjx\" (UniqueName: \"kubernetes.io/projected/158dbcc7-a9af-464b-9886-4f5f6f4bbf71-kube-api-access-g2zjx\") pod \"watcher-operator-controller-manager-864885998-sk2wl\" (UID: \"158dbcc7-a9af-464b-9886-4f5f6f4bbf71\") " pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.508274 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.516155 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.548444 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.578256 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.579172 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvm2d\" (UniqueName: \"kubernetes.io/projected/2aeac7ab-78a9-49f2-9640-0c91b73bbe57-kube-api-access-nvm2d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6tddn\" (UID: \"2aeac7ab-78a9-49f2-9640-0c91b73bbe57\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.579235 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.579310 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmwqw\" (UniqueName: \"kubernetes.io/projected/7d6095a8-a478-472d-b84b-c99877ddcdf1-kube-api-access-hmwqw\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:55 crc kubenswrapper[4731]: E1125 15:09:55.579875 4731 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 15:09:55 crc kubenswrapper[4731]: E1125 15:09:55.579956 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs podName:7d6095a8-a478-472d-b84b-c99877ddcdf1 nodeName:}" failed. No retries permitted until 2025-11-25 15:09:56.079931839 +0000 UTC m=+954.827066269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs") pod "openstack-operator-controller-manager-77d84b6b6b-mdwdb" (UID: "7d6095a8-a478-472d-b84b-c99877ddcdf1") : secret "webhook-server-cert" not found Nov 25 15:09:55 crc kubenswrapper[4731]: E1125 15:09:55.580192 4731 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 15:09:55 crc kubenswrapper[4731]: E1125 15:09:55.580228 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs podName:7d6095a8-a478-472d-b84b-c99877ddcdf1 nodeName:}" failed. No retries permitted until 2025-11-25 15:09:56.080218187 +0000 UTC m=+954.827352617 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs") pod "openstack-operator-controller-manager-77d84b6b6b-mdwdb" (UID: "7d6095a8-a478-472d-b84b-c99877ddcdf1") : secret "metrics-server-cert" not found Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.613601 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvm2d\" (UniqueName: \"kubernetes.io/projected/2aeac7ab-78a9-49f2-9640-0c91b73bbe57-kube-api-access-nvm2d\") pod \"rabbitmq-cluster-operator-manager-668c99d594-6tddn\" (UID: \"2aeac7ab-78a9-49f2-9640-0c91b73bbe57\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.621656 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmwqw\" (UniqueName: \"kubernetes.io/projected/7d6095a8-a478-472d-b84b-c99877ddcdf1-kube-api-access-hmwqw\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.628370 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.758110 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.764937 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.772362 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7"] Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.783208 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-kctp7\" (UID: \"1bdef4b3-96db-42a5-aefd-2641cced3d1e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" Nov 25 15:09:55 crc kubenswrapper[4731]: E1125 15:09:55.783454 4731 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 15:09:55 crc kubenswrapper[4731]: E1125 15:09:55.783525 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert podName:1bdef4b3-96db-42a5-aefd-2641cced3d1e nodeName:}" failed. No retries permitted until 2025-11-25 15:09:56.783500534 +0000 UTC m=+955.530634984 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" (UID: "1bdef4b3-96db-42a5-aefd-2641cced3d1e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 15:09:55 crc kubenswrapper[4731]: I1125 15:09:55.880558 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.087618 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.088129 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.087856 4731 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.088254 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs podName:7d6095a8-a478-472d-b84b-c99877ddcdf1 nodeName:}" failed. No retries permitted until 2025-11-25 15:09:57.08822469 +0000 UTC m=+955.835359120 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs") pod "openstack-operator-controller-manager-77d84b6b6b-mdwdb" (UID: "7d6095a8-a478-472d-b84b-c99877ddcdf1") : secret "webhook-server-cert" not found Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.088338 4731 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.088427 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs podName:7d6095a8-a478-472d-b84b-c99877ddcdf1 nodeName:}" failed. No retries permitted until 2025-11-25 15:09:57.088403375 +0000 UTC m=+955.835537995 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs") pod "openstack-operator-controller-manager-77d84b6b6b-mdwdb" (UID: "7d6095a8-a478-472d-b84b-c99877ddcdf1") : secret "metrics-server-cert" not found Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.171145 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx"] Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.185509 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb"] Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.214957 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z"] Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.219872 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt"] Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.226206 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw"] Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.232378 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj"] Nov 25 15:09:56 crc kubenswrapper[4731]: W1125 15:09:56.234389 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1af6726c_a2ca_43a1_b932_cddcd28457d1.slice/crio-e00a56cc5a750779a0be0200c252a04fe72e4380fcd05ae4cd846c4833b7b83e WatchSource:0}: Error finding container e00a56cc5a750779a0be0200c252a04fe72e4380fcd05ae4cd846c4833b7b83e: Status 404 returned error can't find the container with id e00a56cc5a750779a0be0200c252a04fe72e4380fcd05ae4cd846c4833b7b83e Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.237829 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng"] Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.339536 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" event={"ID":"1af6726c-a2ca-43a1-b932-cddcd28457d1","Type":"ContainerStarted","Data":"e00a56cc5a750779a0be0200c252a04fe72e4380fcd05ae4cd846c4833b7b83e"} Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.344976 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" event={"ID":"8212bdfe-bb4a-4865-9ef9-ba58232188f7","Type":"ContainerStarted","Data":"6a82b746bcc49b8905bfa9a3e7223158f54cec809acbe2dcbf74868fdd726ecc"} Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.349279 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" event={"ID":"a706f3bc-21c7-416e-b7cb-f0886804bc08","Type":"ContainerStarted","Data":"09cf4d0c1f2d69892d205dbce9e7dd98231d949b93b5234996bb4d3701fee45e"} Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.351368 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" 
event={"ID":"92f78ba4-d5a9-43f8-bb6d-7d226c301ca4","Type":"ContainerStarted","Data":"431b2b3d496f18d12c383a7efaef61ede06ac8db5b47f4802b57d1a77880199f"} Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.353283 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" event={"ID":"f5180254-3122-4550-b259-f870f4575073","Type":"ContainerStarted","Data":"6b43777785edd7b472dfe363afacd8f56cd9857af02f8fc4b5f133c41454d3fc"} Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.354856 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" event={"ID":"01518faf-8637-4852-b877-b5c8ca2c7d91","Type":"ContainerStarted","Data":"a38fcb534ee5484866ca0f46502f157086f22df4ba6e559ed378f7dd6dea7b51"} Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.357068 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" event={"ID":"ab316c81-bc21-47b6-8dbd-bd6c26cc2769","Type":"ContainerStarted","Data":"2c0e648aef5a801c6f68b94bd86154071a444aacb2509ada3a87c48a796c71c5"} Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.357955 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" event={"ID":"83e6ff4c-b0f0-4d61-8923-31c0d3f1398f","Type":"ContainerStarted","Data":"2ffcd0576659775ca0a0ac24d654d802678bfaab84517d3fb768022ed03459f8"} Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.358998 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" event={"ID":"fa757060-f39f-4482-8fcb-41ae41ef9cb9","Type":"ContainerStarted","Data":"e13f10a9e7556d683e3848ac4fc1aa59e9537b1a90c70f95d071e1c9f9275d69"} Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.359704 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" event={"ID":"e4f0ff64-5199-4a19-8657-08ff58d915da","Type":"ContainerStarted","Data":"1de882dca2fea0702e330a804f87447e63444171c926b4f7e8de36e33d3d3508"} Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.399553 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-2sf58\" (UID: \"d195c5e8-4c33-44ad-ba19-deadfc6e21ef\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.406752 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d195c5e8-4c33-44ad-ba19-deadfc6e21ef-cert\") pod \"infra-operator-controller-manager-d5cc86f4b-2sf58\" (UID: \"d195c5e8-4c33-44ad-ba19-deadfc6e21ef\") " pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.526536 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz"] Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.538013 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm"] Nov 25 15:09:56 crc kubenswrapper[4731]: W1125 15:09:56.545191 4731 manager.go:1169] Failed 
to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39362f78_55ff_4380_80b3_c46e2d004749.slice/crio-4819bad8bb3079ec1543fe84dadae69a9a6786c32f65c1a33ba46d749818c850 WatchSource:0}: Error finding container 4819bad8bb3079ec1543fe84dadae69a9a6786c32f65c1a33ba46d749818c850: Status 404 returned error can't find the container with id 4819bad8bb3079ec1543fe84dadae69a9a6786c32f65c1a33ba46d749818c850 Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.547010 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql"] Nov 25 15:09:56 crc kubenswrapper[4731]: W1125 15:09:56.549314 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77bbfdb5_d5b4_4512_ac18_e3d787eea6f2.slice/crio-08fdf19c40af25193fcc3a741071b723806bd2c068dacc25bc40979eba9b9dc0 WatchSource:0}: Error finding container 08fdf19c40af25193fcc3a741071b723806bd2c068dacc25bc40979eba9b9dc0: Status 404 returned error can't find the container with id 08fdf19c40af25193fcc3a741071b723806bd2c068dacc25bc40979eba9b9dc0 Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.566643 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb"] Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.580597 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw"] Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.582577 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmrbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-8m7t9_openstack-operators(61221a67-95aa-4e96-a41c-4d9343305157): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.585007 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmrbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-8m7t9_openstack-operators(61221a67-95aa-4e96-a41c-4d9343305157): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.586577 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" podUID="61221a67-95aa-4e96-a41c-4d9343305157" Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.592033 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-864885998-sk2wl"] Nov 25 15:09:56 crc kubenswrapper[4731]: W1125 15:09:56.593022 4731 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc21926a_c3cf_49fa_addf_01d3156c1b5f.slice/crio-f76576ebde7fd8988d2af7f5c6a9afa7159b6c429c587103c15e49f049e9714f WatchSource:0}: Error finding container f76576ebde7fd8988d2af7f5c6a9afa7159b6c429c587103c15e49f049e9714f: Status 404 returned error can't find the container with id f76576ebde7fd8988d2af7f5c6a9afa7159b6c429c587103c15e49f049e9714f Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.600725 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmch8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-ft6fb_openstack-operators(bc21926a-c3cf-49fa-addf-01d3156c1b5f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.601465 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qt5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-bdwql_openstack-operators(1eedb0ec-4f2e-43da-8e15-aae0436e942e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.601558 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5"] Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.605325 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9qt5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-fd75fd47d-bdwql_openstack-operators(1eedb0ec-4f2e-43da-8e15-aae0436e942e): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.605807 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmch8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-ft6fb_openstack-operators(bc21926a-c3cf-49fa-addf-01d3156c1b5f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.606503 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" podUID="1eedb0ec-4f2e-43da-8e15-aae0436e942e" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.607616 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
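[annotation] "pull QPS exceeded" is a client-side error: the kubelet rate-limits image pulls node-wide (the registryPullQPS/registryBurst kubelet settings), and roughly twenty operator pods starting at once drain the bucket, so the later pulls fail immediately and fall through to back-off rather than ever reaching the registry. A token-bucket sketch of that behaviour using golang.org/x/time/rate; the 5 QPS / burst 10 figures are the commonly cited kubelet defaults, assumed here rather than taken from this node's config:

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Token bucket: refills 5 tokens/sec, holds at most 10 (assumed kubelet defaults).
	limiter := rate.NewLimiter(rate.Limit(5), 10)

	// Simulate ~20 operator pods all triggering an image pull at the same instant.
	for pull := 1; pull <= 20; pull++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: admitted\n", pull)
		} else {
			// This is the path that surfaces as ErrImagePull: "pull QPS exceeded".
			fmt.Printf("pull %2d: rejected (QPS exceeded)\n", pull)
		}
	}
}
```

With these parameters the first ten simulated pulls are admitted from the burst and the remainder are rejected, which matches the pattern above: the earliest operator pods start while the later ones (placement, swift, octavia, neutron, telemetry, rabbitmq) all fail within the same second.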
pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" podUID="bc21926a-c3cf-49fa-addf-01d3156c1b5f" Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.609123 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9"] Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.613954 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g2zjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-sk2wl_openstack-operators(158dbcc7-a9af-464b-9886-4f5f6f4bbf71): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.616901 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7qcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-j2wt5_openstack-operators(f5ea628e-44a7-499c-a433-14d18a9bb804): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.621296 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7qcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-j2wt5_openstack-operators(f5ea628e-44a7-499c-a433-14d18a9bb804): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.622388 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" podUID="f5ea628e-44a7-499c-a433-14d18a9bb804" Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.634406 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj"] Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.661934 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.665292 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn"] Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.674128 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:dbb3429c35dcb61ad9e85aef0b186127dd7e203f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgs4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_openstack-operators(089e8703-f71e-4f81-91af-19224ba7b1b4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.675943 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgs4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_openstack-operators(089e8703-f71e-4f81-91af-19224ba7b1b4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.677088 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" podUID="089e8703-f71e-4f81-91af-19224ba7b1b4" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.715855 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nvm2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-6tddn_openstack-operators(2aeac7ab-78a9-49f2-9640-0c91b73bbe57): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.718198 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" podUID="2aeac7ab-78a9-49f2-9640-0c91b73bbe57" Nov 25 15:09:56 crc kubenswrapper[4731]: I1125 15:09:56.842543 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-kctp7\" (UID: \"1bdef4b3-96db-42a5-aefd-2641cced3d1e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.842985 4731 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 15:09:56 crc kubenswrapper[4731]: E1125 15:09:56.843049 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert podName:1bdef4b3-96db-42a5-aefd-2641cced3d1e nodeName:}" failed. No retries permitted until 2025-11-25 15:09:58.84303158 +0000 UTC m=+957.590166010 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert") pod "openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" (UID: "1bdef4b3-96db-42a5-aefd-2641cced3d1e") : secret "openstack-baremetal-operator-webhook-server-cert" not found Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.151468 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.151564 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" Nov 25 15:09:57 crc kubenswrapper[4731]: E1125 15:09:57.152350 4731 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Nov 25 15:09:57 crc kubenswrapper[4731]: E1125 15:09:57.152423 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs podName:7d6095a8-a478-472d-b84b-c99877ddcdf1 nodeName:}" failed. No retries permitted until 2025-11-25 15:09:59.152402211 +0000 UTC m=+957.899536641 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs") pod "openstack-operator-controller-manager-77d84b6b6b-mdwdb" (UID: "7d6095a8-a478-472d-b84b-c99877ddcdf1") : secret "metrics-server-cert" not found
Nov 25 15:09:57 crc kubenswrapper[4731]: E1125 15:09:57.153713 4731 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Nov 25 15:09:57 crc kubenswrapper[4731]: E1125 15:09:57.153756 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs podName:7d6095a8-a478-472d-b84b-c99877ddcdf1 nodeName:}" failed. No retries permitted until 2025-11-25 15:09:59.153744384 +0000 UTC m=+957.900878814 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs") pod "openstack-operator-controller-manager-77d84b6b6b-mdwdb" (UID: "7d6095a8-a478-472d-b84b-c99877ddcdf1") : secret "webhook-server-cert" not found
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.205860 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58"]
Nov 25 15:09:57 crc kubenswrapper[4731]: W1125 15:09:57.234325 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd195c5e8_4c33_44ad_ba19_deadfc6e21ef.slice/crio-427fb4e80129d134196ed37954488afb5b169336aa99877c532684de1452845d WatchSource:0}: Error finding container 427fb4e80129d134196ed37954488afb5b169336aa99877c532684de1452845d: Status 404 returned error can't find the container with id 427fb4e80129d134196ed37954488afb5b169336aa99877c532684de1452845d
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.372592 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" event={"ID":"bc21926a-c3cf-49fa-addf-01d3156c1b5f","Type":"ContainerStarted","Data":"f76576ebde7fd8988d2af7f5c6a9afa7159b6c429c587103c15e49f049e9714f"}
Nov 25 15:09:57 crc kubenswrapper[4731]: E1125 15:09:57.375089 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" podUID="bc21926a-c3cf-49fa-addf-01d3156c1b5f"
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.375525 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" event={"ID":"d195c5e8-4c33-44ad-ba19-deadfc6e21ef","Type":"ContainerStarted","Data":"427fb4e80129d134196ed37954488afb5b169336aa99877c532684de1452845d"}
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.385501 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" event={"ID":"089e8703-f71e-4f81-91af-19224ba7b1b4","Type":"ContainerStarted","Data":"ce2df5e85cf42a631c0b5e3d72c25f66470187d6902c76f309f1ec652ec47ee9"}
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.388996 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" event={"ID":"39362f78-55ff-4380-80b3-c46e2d004749","Type":"ContainerStarted","Data":"4819bad8bb3079ec1543fe84dadae69a9a6786c32f65c1a33ba46d749818c850"}
Nov 25 15:09:57 crc kubenswrapper[4731]: E1125 15:09:57.392982 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:dbb3429c35dcb61ad9e85aef0b186127dd7e203f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" podUID="089e8703-f71e-4f81-91af-19224ba7b1b4"
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.400797 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" event={"ID":"2aeac7ab-78a9-49f2-9640-0c91b73bbe57","Type":"ContainerStarted","Data":"5fc5aa5ee3feb0713604cfc0feeaf88ed818cf7f0d167780b2dc379a7d502a65"}
Nov 25 15:09:57 crc kubenswrapper[4731]: E1125 15:09:57.403671 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" podUID="2aeac7ab-78a9-49f2-9640-0c91b73bbe57"
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.403749 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" event={"ID":"158dbcc7-a9af-464b-9886-4f5f6f4bbf71","Type":"ContainerStarted","Data":"3916f26eb1957b68ed161ac9adfa77bc24b8a6eb32535acdf12b083b47e70be8"}
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.417253 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" event={"ID":"61221a67-95aa-4e96-a41c-4d9343305157","Type":"ContainerStarted","Data":"1d0faf8cbee472cf9c5bc05a915dfc6af482429f28a2ecc2b8e49723174a59c7"}
Nov 25 15:09:57 crc kubenswrapper[4731]: E1125 15:09:57.427070 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" podUID="61221a67-95aa-4e96-a41c-4d9343305157"
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.431041 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" event={"ID":"1eedb0ec-4f2e-43da-8e15-aae0436e942e","Type":"ContainerStarted","Data":"0e9d5f2f8ab605a40f963fc868eb8535848f47de9bd8e8eeca010c8a50f579ea"}
Nov 25 15:09:57 crc kubenswrapper[4731]: E1125 15:09:57.436139 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" podUID="1eedb0ec-4f2e-43da-8e15-aae0436e942e"
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.439677 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" event={"ID":"f5ea628e-44a7-499c-a433-14d18a9bb804","Type":"ContainerStarted","Data":"09f010c9a666619f0c0721f5e1d2dacf721f4637f1b3e7ba43faf84be55703ee"}
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.444329 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" event={"ID":"54f47a6e-c9be-49a3-89af-43ecbd6b7b93","Type":"ContainerStarted","Data":"9cb2ebb21b1533fad39514613c3c0e2e7961b22898ab96e0c8891550b7bd153e"}
Nov 25 15:09:57 crc kubenswrapper[4731]: E1125 15:09:57.444739 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" podUID="f5ea628e-44a7-499c-a433-14d18a9bb804"
Nov 25 15:09:57 crc kubenswrapper[4731]: I1125 15:09:57.452745 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" event={"ID":"77bbfdb5-d5b4-4512-ac18-e3d787eea6f2","Type":"ContainerStarted","Data":"08fdf19c40af25193fcc3a741071b723806bd2c068dacc25bc40979eba9b9dc0"}
Nov 25 15:09:58 crc kubenswrapper[4731]: E1125 15:09:58.472132 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" podUID="2aeac7ab-78a9-49f2-9640-0c91b73bbe57"
Nov 25 15:09:58 crc kubenswrapper[4731]: E1125 15:09:58.473030 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" podUID="bc21926a-c3cf-49fa-addf-01d3156c1b5f"
Nov 25 15:09:58 crc kubenswrapper[4731]: E1125 15:09:58.473199 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:dbb3429c35dcb61ad9e85aef0b186127dd7e203f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" podUID="089e8703-f71e-4f81-91af-19224ba7b1b4"
Nov 25 15:09:58 crc kubenswrapper[4731]: E1125 15:09:58.473410 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" podUID="61221a67-95aa-4e96-a41c-4d9343305157"
Nov 25 15:09:58 crc kubenswrapper[4731]: E1125 15:09:58.482622 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:442c269d79163f8da75505019c02e9f0815837aaadcaddacb8e6c12df297ca13\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" podUID="1eedb0ec-4f2e-43da-8e15-aae0436e942e"
Nov 25 15:09:58 crc kubenswrapper[4731]: E1125 15:09:58.487193 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" podUID="f5ea628e-44a7-499c-a433-14d18a9bb804"
Nov 25 15:09:58 crc kubenswrapper[4731]: I1125 15:09:58.887183 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-kctp7\" (UID: \"1bdef4b3-96db-42a5-aefd-2641cced3d1e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7"
Nov 25 15:09:58 crc kubenswrapper[4731]: I1125 15:09:58.913614 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1bdef4b3-96db-42a5-aefd-2641cced3d1e-cert\") pod \"openstack-baremetal-operator-controller-manager-544b9bb9-kctp7\" (UID: \"1bdef4b3-96db-42a5-aefd-2641cced3d1e\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7"
Nov 25 15:09:59 crc kubenswrapper[4731]: I1125 15:09:59.063052 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7"
Nov 25 15:09:59 crc kubenswrapper[4731]: I1125 15:09:59.193405 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb"
Nov 25 15:09:59 crc kubenswrapper[4731]: I1125 15:09:59.193912 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb"
Nov 25 15:09:59 crc kubenswrapper[4731]: I1125 15:09:59.205032 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-webhook-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb"
Nov 25 15:09:59 crc kubenswrapper[4731]: I1125 15:09:59.205118 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d6095a8-a478-472d-b84b-c99877ddcdf1-metrics-certs\") pod \"openstack-operator-controller-manager-77d84b6b6b-mdwdb\" (UID: \"7d6095a8-a478-472d-b84b-c99877ddcdf1\") " pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb"
Nov 25 15:09:59 crc kubenswrapper[4731]: I1125 15:09:59.444779 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb"
Nov 25 15:10:09 crc kubenswrapper[4731]: I1125 15:10:09.450309 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb"]
Nov 25 15:10:10 crc kubenswrapper[4731]: W1125 15:10:10.157131 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d6095a8_a478_472d_b84b_c99877ddcdf1.slice/crio-2c1c80a5f7f530cb49022ce253ae6e358af7a2bdcc91e1a1dad95962028e5ac1 WatchSource:0}: Error finding container 2c1c80a5f7f530cb49022ce253ae6e358af7a2bdcc91e1a1dad95962028e5ac1: Status 404 returned error can't find the container with id 2c1c80a5f7f530cb49022ce253ae6e358af7a2bdcc91e1a1dad95962028e5ac1
Nov 25 15:10:10 crc kubenswrapper[4731]: I1125 15:10:10.560033 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" event={"ID":"7d6095a8-a478-472d-b84b-c99877ddcdf1","Type":"ContainerStarted","Data":"2c1c80a5f7f530cb49022ce253ae6e358af7a2bdcc91e1a1dad95962028e5ac1"}
Nov 25 15:10:10 crc kubenswrapper[4731]: I1125 15:10:10.590849 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7"]
Nov 25 15:10:11 crc kubenswrapper[4731]: I1125 15:10:11.569809 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" event={"ID":"1bdef4b3-96db-42a5-aefd-2641cced3d1e","Type":"ContainerStarted","Data":"826a456747948a6826085b142fb6e6094d954291d94d2b449c833cbf113f4fb9"}
Nov 25 15:10:12 crc kubenswrapper[4731]: E1125 15:10:12.079323 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Nov 25 15:10:12 crc kubenswrapper[4731]: E1125 15:10:12.079637 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g2zjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-sk2wl_openstack-operators(158dbcc7-a9af-464b-9886-4f5f6f4bbf71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:10:12 crc kubenswrapper[4731]: E1125 15:10:12.080943 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" podUID="158dbcc7-a9af-464b-9886-4f5f6f4bbf71"
Nov 25 15:10:13 crc kubenswrapper[4731]: I1125 15:10:13.585776 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" event={"ID":"7d6095a8-a478-472d-b84b-c99877ddcdf1","Type":"ContainerStarted","Data":"cb4ab2c63c97452d9dce3c26c1c6380b23ad8d4d986cf93204373adea9b3f749"}
Nov 25 15:10:13 crc kubenswrapper[4731]: I1125 15:10:13.591229 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" event={"ID":"f5180254-3122-4550-b259-f870f4575073","Type":"ContainerStarted","Data":"ab4d6dfed5f18aa1f3a16d53ec0c3e2aba3df4e464e656ff7bd732a7cc43865d"}
Nov 25 15:10:13 crc kubenswrapper[4731]: I1125 15:10:13.593434 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" event={"ID":"77bbfdb5-d5b4-4512-ac18-e3d787eea6f2","Type":"ContainerStarted","Data":"1a6fd34a3bb86c467e31be9e7ef118e238a552df7a0895a217c63b2e2577fba3"}
Nov 25 15:10:13 crc kubenswrapper[4731]: I1125 15:10:13.595348 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" event={"ID":"ab316c81-bc21-47b6-8dbd-bd6c26cc2769","Type":"ContainerStarted","Data":"98e51e41e17eb69e26ac0831a472f9a4567d9099491f7e6bfb72bb863b5e5996"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.603587 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" event={"ID":"e4f0ff64-5199-4a19-8657-08ff58d915da","Type":"ContainerStarted","Data":"cdda39bcd7f2eecdc1b02c96f2f8297712c08270857137bbedc19539dbb4fbf7"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.605008 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" event={"ID":"1af6726c-a2ca-43a1-b932-cddcd28457d1","Type":"ContainerStarted","Data":"b37a8d575bf62b3f6ec51162fb8c109a72f17a22cd996f91f6252fcabc98d739"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.606424 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" event={"ID":"54f47a6e-c9be-49a3-89af-43ecbd6b7b93","Type":"ContainerStarted","Data":"cf8062faf5eb87ab6be09c13f6bb9739f5ac66a13e2ff128d60add38cef059f3"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.607565 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" event={"ID":"a706f3bc-21c7-416e-b7cb-f0886804bc08","Type":"ContainerStarted","Data":"93f5f1103717bdad005f38545fd6c6999c4ecceac77fc9619b225eba18cc6701"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.608863 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" event={"ID":"92f78ba4-d5a9-43f8-bb6d-7d226c301ca4","Type":"ContainerStarted","Data":"71f0a77062b902db86c850b8b01fb4653da7a52afde252d25750fa8e2d5cd3ff"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.611434 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" event={"ID":"01518faf-8637-4852-b877-b5c8ca2c7d91","Type":"ContainerStarted","Data":"f36c968282af60d9a27e8ba9d30047925115754b633f4d1bb801fede0168b6d2"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.612728 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" event={"ID":"83e6ff4c-b0f0-4d61-8923-31c0d3f1398f","Type":"ContainerStarted","Data":"6614ce88552ef247b7133b41dbc44ae45f136c7fd93923980d08f61d6b79a1da"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.613999 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" event={"ID":"39362f78-55ff-4380-80b3-c46e2d004749","Type":"ContainerStarted","Data":"365c34964166827d8204f59b1ded29e647e0546ae96a8f68f485b9a942ba5de6"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.615333 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" event={"ID":"8212bdfe-bb4a-4865-9ef9-ba58232188f7","Type":"ContainerStarted","Data":"eea9e70245509246891f8216a95fe7853489d6be42c6dcdc70f26bd38fa4b250"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.616623 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" event={"ID":"d195c5e8-4c33-44ad-ba19-deadfc6e21ef","Type":"ContainerStarted","Data":"e0d44910a284b4e8d3e70e850ee72fe756d1a52a4675916c1bce83aa6b1f4f81"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.617866 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" event={"ID":"fa757060-f39f-4482-8fcb-41ae41ef9cb9","Type":"ContainerStarted","Data":"21ed0f4d81fd7086c445246cf79a14b80898642daa2646c44a6b8c7e37b5d6e2"}
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.618024 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb"
Nov 25 15:10:14 crc kubenswrapper[4731]: I1125 15:10:14.659048 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb" podStartSLOduration=19.659024704 podStartE2EDuration="19.659024704s" podCreationTimestamp="2025-11-25 15:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:10:14.655283941 +0000 UTC m=+973.402418371" watchObservedRunningTime="2025-11-25 15:10:14.659024704 +0000 UTC m=+973.406159124"
Nov 25 15:10:14 crc kubenswrapper[4731]: E1125 15:10:14.976104 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-42nn8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-cb6c4fdb7-7kltb_openstack-operators(01518faf-8637-4852-b877-b5c8ca2c7d91): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Nov 25 15:10:14 crc kubenswrapper[4731]: E1125 15:10:14.979082 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" podUID="01518faf-8637-4852-b877-b5c8ca2c7d91"
Nov 25 15:10:15 crc kubenswrapper[4731]: I1125 15:10:15.629797 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb"
Nov 25 15:10:15 crc kubenswrapper[4731]: E1125 15:10:15.739841 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" podUID="01518faf-8637-4852-b877-b5c8ca2c7d91"
Nov 25 15:10:16 crc kubenswrapper[4731]: I1125 15:10:16.638911 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" event={"ID":"1eedb0ec-4f2e-43da-8e15-aae0436e942e","Type":"ContainerStarted","Data":"7ea24c115488dcab6862b7d3bce1967020216b606d775c06075087ebd5ed3d72"}
Nov 25 15:10:17 crc kubenswrapper[4731]: I1125 15:10:17.648867 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" event={"ID":"01518faf-8637-4852-b877-b5c8ca2c7d91","Type":"ContainerStarted","Data":"a196a7a5bb1f724f02c80765ae96b74f055bbb30b4324ff4d1c0e07747bccc27"}
Nov 25 15:10:17 crc kubenswrapper[4731]: I1125 15:10:17.672280 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb" podStartSLOduration=11.253632623 podStartE2EDuration="23.672263137s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.197392109 +0000 UTC m=+954.944526539" lastFinishedPulling="2025-11-25 15:10:08.616022623 +0000 UTC m=+967.363157053" observedRunningTime="2025-11-25 15:10:17.671725834 +0000 UTC m=+976.418860264" watchObservedRunningTime="2025-11-25 15:10:17.672263137 +0000 UTC m=+976.419397567"
Nov 25 15:10:19 crc kubenswrapper[4731]: I1125 15:10:19.455309 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-77d84b6b6b-mdwdb"
Nov 25 15:10:20 crc kubenswrapper[4731]: I1125 15:10:20.039810 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:10:20 crc kubenswrapper[4731]: I1125 15:10:20.039899 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:10:25 crc kubenswrapper[4731]: I1125 15:10:25.082438 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-cb6c4fdb7-7kltb"
Nov 25 15:10:40 crc kubenswrapper[4731]: E1125 15:10:40.745151 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6"
Nov 25 15:10:40 crc kubenswrapper[4731]: E1125 15:10:40.746294 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w7qcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-7c57c8bbc4-j2wt5_openstack-operators(f5ea628e-44a7-499c-a433-14d18a9bb804): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:10:40 crc kubenswrapper[4731]: E1125 15:10:40.819004 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0"
Nov 25 15:10:40 crc kubenswrapper[4731]: E1125 15:10:40.819277 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rmch8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6fdc4fcf86-ft6fb_openstack-operators(bc21926a-c3cf-49fa-addf-01d3156c1b5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:10:42 crc kubenswrapper[4731]: E1125 15:10:42.545278 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd"
Nov 25 15:10:42 crc kubenswrapper[4731]: E1125 15:10:42.545930 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_
CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVa
r{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_
MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED
_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k6bkp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-544b9bb9-kctp7_openstack-operators(1bdef4b3-96db-42a5-aefd-2641cced3d1e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:10:43 crc kubenswrapper[4731]: E1125 15:10:43.318126 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2"
Nov 25 15:10:43 crc kubenswrapper[4731]: E1125 15:10:43.319330 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nvm2d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-6tddn_openstack-operators(2aeac7ab-78a9-49f2-9640-0c91b73bbe57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:10:43 crc kubenswrapper[4731]: E1125 15:10:43.320845 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" podUID="2aeac7ab-78a9-49f2-9640-0c91b73bbe57"
Nov 25 15:10:44 crc kubenswrapper[4731]: E1125 15:10:44.680412 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c"
Nov 25 15:10:44 crc kubenswrapper[4731]: E1125 15:10:44.680913 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmrbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5db546f9d9-8m7t9_openstack-operators(61221a67-95aa-4e96-a41c-4d9343305157): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:10:44 crc kubenswrapper[4731]: E1125 15:10:44.790048 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:dbb3429c35dcb61ad9e85aef0b186127dd7e203f"
Nov 25 15:10:44 crc kubenswrapper[4731]: E1125 15:10:44.790201 4731 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:dbb3429c35dcb61ad9e85aef0b186127dd7e203f"
Nov 25 15:10:44 crc kubenswrapper[4731]: E1125 15:10:44.790366 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:dbb3429c35dcb61ad9e85aef0b186127dd7e203f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qgs4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_openstack-operators(089e8703-f71e-4f81-91af-19224ba7b1b4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.156179 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.156361 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xl7q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79556f57fc-qjrbz_openstack-operators(39362f78-55ff-4380-80b3-c46e2d004749): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError"
Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.156410 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.156482 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2cgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-d5cc86f4b-2sf58_openstack-operators(d195c5e8-4c33-44ad-ba19-deadfc6e21ef): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError"
Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.156779 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.156885 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying layer: context canceled" image="quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0"
Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.157091 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dpgt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-58bb8d67cc-2npjw_openstack-operators(1af6726c-a2ca-43a1-b932-cddcd28457d1): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError"
Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.157542 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" podUID="39362f78-55ff-4380-80b3-c46e2d004749"
Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.157676 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" podUID="d195c5e8-4c33-44ad-ba19-deadfc6e21ef"
Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.158142 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m
DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-56sl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5bfcdc958c-sh9ng_openstack-operators(fa757060-f39f-4482-8fcb-41ae41ef9cb9): ErrImagePull: rpc error: code = Canceled desc = copying layer: context canceled" logger="UnhandledError" Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.158301 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" podUID="1af6726c-a2ca-43a1-b932-cddcd28457d1" Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.160331 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"rpc error: code = Canceled desc = copying layer: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" podUID="fa757060-f39f-4482-8fcb-41ae41ef9cb9" Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.186080 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f" Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.186367 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g2zjx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-864885998-sk2wl_openstack-operators(158dbcc7-a9af-464b-9886-4f5f6f4bbf71): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.687779 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" podUID="61221a67-95aa-4e96-a41c-4d9343305157" Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.732715 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" podUID="f5ea628e-44a7-499c-a433-14d18a9bb804" Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.765078 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" podUID="bc21926a-c3cf-49fa-addf-01d3156c1b5f" Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.827144 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" podUID="1bdef4b3-96db-42a5-aefd-2641cced3d1e" Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.827450 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
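The burst of `PullImage from image service failed ... code = Canceled` entries above is the CRI image service reporting a gRPC cancellation: the kubelet's pull context was torn down mid-transfer (during `copying config` or `copying layer`), so pod_workers records each affected container as ErrImagePull rather than as a registry rejection. A minimal sketch, assuming a gRPC-backed image service, of how such an error can be classified on the caller side; `pullImage` here is a hypothetical stand-in, not the kubelet's real call path:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// pullImage is a hypothetical stand-in for a CRI PullImage RPC; a real
// gRPC client returns a status error with codes.Canceled when the
// caller's context is canceled mid-transfer.
func pullImage(ctx context.Context, image string) error {
	select {
	case <-ctx.Done():
		return status.Error(codes.Canceled, "copying config: context canceled")
	case <-time.After(2 * time.Second):
		return nil
	}
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	go func() { time.Sleep(100 * time.Millisecond); cancel() }() // pull interrupted

	err := pullImage(ctx, "quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0")
	switch status.Code(err) { // status.Code returns codes.OK for nil errors
	case codes.OK:
		fmt.Println("pulled")
	case codes.Canceled:
		// Transient: the transfer was interrupted, not refused by the
		// registry, so a later sync pass retries it.
		fmt.Println("pull canceled mid-transfer:", err)
	default:
		fmt.Println("pull failed:", err)
	}
}
```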
pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" podUID="089e8703-f71e-4f81-91af-19224ba7b1b4" Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.877567 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" podUID="158dbcc7-a9af-464b-9886-4f5f6f4bbf71" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.917548 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" event={"ID":"f5ea628e-44a7-499c-a433-14d18a9bb804","Type":"ContainerStarted","Data":"3248af28bb3893810df87647a66e3fb2d1550439bcafd0fb934aebe8feab0a3a"} Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.919440 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" podUID="f5ea628e-44a7-499c-a433-14d18a9bb804" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.928911 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" event={"ID":"158dbcc7-a9af-464b-9886-4f5f6f4bbf71","Type":"ContainerStarted","Data":"a01f3962610075aa230e8b6e195bb6be6f3d577d401c160b719b2062155829a6"} Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.934798 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" podUID="158dbcc7-a9af-464b-9886-4f5f6f4bbf71" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.939434 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" event={"ID":"61221a67-95aa-4e96-a41c-4d9343305157","Type":"ContainerStarted","Data":"72bfda319701484030714153a3450c2a559a0c0230f40cb35709e7e377cf8a79"} Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.941585 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" podUID="61221a67-95aa-4e96-a41c-4d9343305157" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.951778 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" event={"ID":"e4f0ff64-5199-4a19-8657-08ff58d915da","Type":"ContainerStarted","Data":"a3c3ee7c8ed48cb89036cc2bfe8e156a9ecf3e950b4a0afd0209dd7e561dcdc0"} Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.952483 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" Nov 25 
15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.954736 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.964272 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" podStartSLOduration=2.519884637 podStartE2EDuration="51.964257073s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:55.81593972 +0000 UTC m=+954.563074150" lastFinishedPulling="2025-11-25 15:10:45.260312156 +0000 UTC m=+1004.007446586" observedRunningTime="2025-11-25 15:10:45.964187092 +0000 UTC m=+1004.711321522" watchObservedRunningTime="2025-11-25 15:10:45.964257073 +0000 UTC m=+1004.711391503" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.964519 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" event={"ID":"089e8703-f71e-4f81-91af-19224ba7b1b4","Type":"ContainerStarted","Data":"28ba898ac498d7fda11b6592e929d50a16d4224d8593f6b87ecc4e5859451db8"} Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.968266 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:dbb3429c35dcb61ad9e85aef0b186127dd7e203f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" podUID="089e8703-f71e-4f81-91af-19224ba7b1b4" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.981349 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" event={"ID":"8212bdfe-bb4a-4865-9ef9-ba58232188f7","Type":"ContainerStarted","Data":"e1e707df71a666dae8eab3ee2c007287d0be727446409e3e03ceec7fcbca723c"} Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.981400 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.986147 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.994229 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" event={"ID":"54f47a6e-c9be-49a3-89af-43ecbd6b7b93","Type":"ContainerStarted","Data":"6f83a605593fc8f6ac264045652da665099787a4a8bf262de4f647eb615cb69e"} Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.995381 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.998124 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" event={"ID":"bc21926a-c3cf-49fa-addf-01d3156c1b5f","Type":"ContainerStarted","Data":"7b2c1856ed2520501dce9b4b0dfa8a92fa148a6cfebcb0372b89c88a2a16ffce"} Nov 25 15:10:45 crc kubenswrapper[4731]: E1125 15:10:45.999281 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" podUID="bc21926a-c3cf-49fa-addf-01d3156c1b5f" Nov 25 15:10:45 crc kubenswrapper[4731]: I1125 15:10:45.999845 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.005139 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" event={"ID":"83e6ff4c-b0f0-4d61-8923-31c0d3f1398f","Type":"ContainerStarted","Data":"f5f19a3b943ab0e6d2d646f271c3f7f3a9a7acd5ed80a9b5f88a4443aed816d9"} Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.006287 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.011145 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.015527 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" event={"ID":"1bdef4b3-96db-42a5-aefd-2641cced3d1e","Type":"ContainerStarted","Data":"3dce9dd6eb1e0f16661f5a8641b629a7bd6304e771ff0a1495f56118da5998c3"} Nov 25 15:10:46 crc kubenswrapper[4731]: E1125 15:10:46.017061 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" podUID="1bdef4b3-96db-42a5-aefd-2641cced3d1e" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.032903 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" event={"ID":"ab316c81-bc21-47b6-8dbd-bd6c26cc2769","Type":"ContainerStarted","Data":"81fa8f22ffebde5d295131e47ccbac8625d1d8ba10b2e03f14c4470929bc354a"} Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.032991 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.039395 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.040075 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" event={"ID":"92f78ba4-d5a9-43f8-bb6d-7d226c301ca4","Type":"ContainerStarted","Data":"4f8c2a4037ae5fafa01ba7d7761abdd1388208c9b7f5a89c850b35bfac3558fc"} Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.040533 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.045266 4731 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.049097 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.053850 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.054285 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5cb74df96-9w7kw" podStartSLOduration=3.259406095 podStartE2EDuration="52.054267521s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.582123889 +0000 UTC m=+955.329258319" lastFinishedPulling="2025-11-25 15:10:45.376985315 +0000 UTC m=+1004.124119745" observedRunningTime="2025-11-25 15:10:46.048868786 +0000 UTC m=+1004.796003216" watchObservedRunningTime="2025-11-25 15:10:46.054267521 +0000 UTC m=+1004.801401951" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.057352 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" event={"ID":"77bbfdb5-d5b4-4512-ac18-e3d787eea6f2","Type":"ContainerStarted","Data":"dda6ce8e4a2cd43b784d5572d2d473e0e8cf2b3bf90d74a5b2506edcaaba06fd"} Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.057976 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.060340 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.075647 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" event={"ID":"1eedb0ec-4f2e-43da-8e15-aae0436e942e","Type":"ContainerStarted","Data":"af330c18d792c00fe2d6eb3415cf28071cdf5e428b6b2db6640cf82c7ff5d90c"} Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.075761 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.075779 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.076556 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.077432 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.077460 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.082506 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.082779 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.084050 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.088643 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.103059 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.105426 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c9694994-kn45z" podStartSLOduration=29.630091766 podStartE2EDuration="52.105398915s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.238442276 +0000 UTC m=+954.985576716" lastFinishedPulling="2025-11-25 15:10:18.713749425 +0000 UTC m=+977.460883865" observedRunningTime="2025-11-25 15:10:46.088267765 +0000 UTC m=+1004.835402195" watchObservedRunningTime="2025-11-25 15:10:46.105398915 +0000 UTC m=+1004.852533345" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.227859 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-748dc6576f-h4whj" podStartSLOduration=3.296507659 podStartE2EDuration="52.227839997s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.238371894 +0000 UTC m=+954.985506324" lastFinishedPulling="2025-11-25 15:10:45.169704232 +0000 UTC m=+1003.916838662" observedRunningTime="2025-11-25 15:10:46.196138802 +0000 UTC m=+1004.943273222" watchObservedRunningTime="2025-11-25 15:10:46.227839997 +0000 UTC m=+1004.974974427" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.316375 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-68b95954c9-r52fx" podStartSLOduration=3.314826592 podStartE2EDuration="52.316354089s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.179863149 +0000 UTC m=+954.926997579" lastFinishedPulling="2025-11-25 15:10:45.181390646 +0000 UTC m=+1003.928525076" observedRunningTime="2025-11-25 15:10:46.302175793 +0000 UTC m=+1005.049310223" watchObservedRunningTime="2025-11-25 15:10:46.316354089 +0000 UTC m=+1005.063488519" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.372400 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-774b86978c-kxfnt" podStartSLOduration=3.9062266660000002 podStartE2EDuration="52.372374075s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.231432544 +0000 UTC m=+954.978566974" lastFinishedPulling="2025-11-25 15:10:44.697579953 +0000 UTC m=+1003.444714383" observedRunningTime="2025-11-25 15:10:46.357156033 +0000 UTC m=+1005.104290483" watchObservedRunningTime="2025-11-25 15:10:46.372374075 +0000 
UTC m=+1005.119508515" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.412253 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" podStartSLOduration=2.80988882 podStartE2EDuration="52.412222725s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:55.823159847 +0000 UTC m=+954.570294277" lastFinishedPulling="2025-11-25 15:10:45.425493752 +0000 UTC m=+1004.172628182" observedRunningTime="2025-11-25 15:10:46.398802298 +0000 UTC m=+1005.145936728" watchObservedRunningTime="2025-11-25 15:10:46.412222725 +0000 UTC m=+1005.159357165" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.474798 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-66cf5c67ff-wmjzm" podStartSLOduration=3.855694918 podStartE2EDuration="52.473669237s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.568050463 +0000 UTC m=+955.315184893" lastFinishedPulling="2025-11-25 15:10:45.186024782 +0000 UTC m=+1003.933159212" observedRunningTime="2025-11-25 15:10:46.433667903 +0000 UTC m=+1005.180802383" watchObservedRunningTime="2025-11-25 15:10:46.473669237 +0000 UTC m=+1005.220803667" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.537008 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-79856dc55c-8w79f" podStartSLOduration=3.174146962 podStartE2EDuration="52.536987506s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:55.823154417 +0000 UTC m=+954.570288857" lastFinishedPulling="2025-11-25 15:10:45.185994981 +0000 UTC m=+1003.933129401" observedRunningTime="2025-11-25 15:10:46.535395956 +0000 UTC m=+1005.282530396" watchObservedRunningTime="2025-11-25 15:10:46.536987506 +0000 UTC m=+1005.284121936" Nov 25 15:10:46 crc kubenswrapper[4731]: I1125 15:10:46.595032 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-fd75fd47d-bdwql" podStartSLOduration=33.446170482 podStartE2EDuration="52.595000042s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.60135585 +0000 UTC m=+955.348490280" lastFinishedPulling="2025-11-25 15:10:15.75018541 +0000 UTC m=+974.497319840" observedRunningTime="2025-11-25 15:10:46.578130779 +0000 UTC m=+1005.325265199" watchObservedRunningTime="2025-11-25 15:10:46.595000042 +0000 UTC m=+1005.342134572" Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.084618 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" event={"ID":"d195c5e8-4c33-44ad-ba19-deadfc6e21ef","Type":"ContainerStarted","Data":"be759c661376700aac5cc8a4c2766ba5ebb260f641abc3e27d0bef491c62b588"} Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.087250 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" event={"ID":"fa757060-f39f-4482-8fcb-41ae41ef9cb9","Type":"ContainerStarted","Data":"26dabed2d52b07fbd7fb53bbf3172002d07a59dde3aa00fa4e1e609d163479f8"} Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.089197 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" 
event={"ID":"1af6726c-a2ca-43a1-b932-cddcd28457d1","Type":"ContainerStarted","Data":"c90fe8cad3a96671b396ac6e1ddc07fa8be91e564a58067422ccd3a269317b52"} Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.091710 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" event={"ID":"39362f78-55ff-4380-80b3-c46e2d004749","Type":"ContainerStarted","Data":"746153bb03ad84336c6bb630e83c5e86ad1240c32a55cd54a7b0e23a89061cf1"} Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.094389 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" event={"ID":"a706f3bc-21c7-416e-b7cb-f0886804bc08","Type":"ContainerStarted","Data":"f3fcde0649711876ef47efe9ed5d909cc5fbeed04be1e02fe259fd74119d5906"} Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.094876 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.099062 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-86dc4d89c8-vldj4" Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.099094 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" event={"ID":"f5180254-3122-4550-b259-f870f4575073","Type":"ContainerStarted","Data":"2fa28a27e163ed15cc8d73811393f53321b6bb19cc3ef437a59f17002f4f389e"} Nov 25 15:10:47 crc kubenswrapper[4731]: E1125 15:10:47.101107 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:78852f8ba332a5756c1551c126157f735279101a0fc3277ba4aa4db3478789dd\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" podUID="1bdef4b3-96db-42a5-aefd-2641cced3d1e" Nov 25 15:10:47 crc kubenswrapper[4731]: E1125 15:10:47.101392 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" podUID="158dbcc7-a9af-464b-9886-4f5f6f4bbf71" Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.120208 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-d5cc86f4b-2sf58" podStartSLOduration=38.476608709 podStartE2EDuration="53.120178672s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:57.245092215 +0000 UTC m=+955.992226645" lastFinishedPulling="2025-11-25 15:10:11.888662138 +0000 UTC m=+970.635796608" observedRunningTime="2025-11-25 15:10:47.112773487 +0000 UTC m=+1005.859907917" watchObservedRunningTime="2025-11-25 15:10:47.120178672 +0000 UTC m=+1005.867313102" Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.152862 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79556f57fc-qjrbz" podStartSLOduration=38.472215067 podStartE2EDuration="53.152834702s" 
podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.554099211 +0000 UTC m=+955.301233641" lastFinishedPulling="2025-11-25 15:10:11.234718836 +0000 UTC m=+969.981853276" observedRunningTime="2025-11-25 15:10:47.133291752 +0000 UTC m=+1005.880426182" watchObservedRunningTime="2025-11-25 15:10:47.152834702 +0000 UTC m=+1005.899969132" Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.227043 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5bfcdc958c-sh9ng" podStartSLOduration=38.583937089 podStartE2EDuration="53.227012124s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.238969069 +0000 UTC m=+954.986103499" lastFinishedPulling="2025-11-25 15:10:10.882044104 +0000 UTC m=+969.629178534" observedRunningTime="2025-11-25 15:10:47.209433972 +0000 UTC m=+1005.956568412" watchObservedRunningTime="2025-11-25 15:10:47.227012124 +0000 UTC m=+1005.974146564" Nov 25 15:10:47 crc kubenswrapper[4731]: I1125 15:10:47.242343 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" podStartSLOduration=38.282310316 podStartE2EDuration="53.242317218s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.243544631 +0000 UTC m=+954.990679061" lastFinishedPulling="2025-11-25 15:10:11.203551533 +0000 UTC m=+969.950685963" observedRunningTime="2025-11-25 15:10:47.235849575 +0000 UTC m=+1005.982984005" watchObservedRunningTime="2025-11-25 15:10:47.242317218 +0000 UTC m=+1005.989451648" Nov 25 15:10:50 crc kubenswrapper[4731]: I1125 15:10:50.040208 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:10:50 crc kubenswrapper[4731]: I1125 15:10:50.040755 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:10:50 crc kubenswrapper[4731]: I1125 15:10:50.040871 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 15:10:50 crc kubenswrapper[4731]: I1125 15:10:50.041873 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0c9a20a237355e5d71b9ca12829a2374c42e2593ae50b43c8d42515f2f362b09"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:10:50 crc kubenswrapper[4731]: I1125 15:10:50.042014 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://0c9a20a237355e5d71b9ca12829a2374c42e2593ae50b43c8d42515f2f362b09" gracePeriod=600 Nov 25 15:10:51 crc kubenswrapper[4731]: I1125 15:10:51.141215 4731 
generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="0c9a20a237355e5d71b9ca12829a2374c42e2593ae50b43c8d42515f2f362b09" exitCode=0 Nov 25 15:10:51 crc kubenswrapper[4731]: I1125 15:10:51.141315 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"0c9a20a237355e5d71b9ca12829a2374c42e2593ae50b43c8d42515f2f362b09"} Nov 25 15:10:51 crc kubenswrapper[4731]: I1125 15:10:51.141690 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"f9bc0406706a4644ebe665a4db06c99d1f92fdd1f784b0fafdbe78a287d513a2"} Nov 25 15:10:51 crc kubenswrapper[4731]: I1125 15:10:51.141723 4731 scope.go:117] "RemoveContainer" containerID="0455c99facc14bf1fc68ee4a3d36481680ea8d93970fec20b06e0683abb73b7e" Nov 25 15:10:55 crc kubenswrapper[4731]: E1125 15:10:55.815462 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" podUID="2aeac7ab-78a9-49f2-9640-0c91b73bbe57" Nov 25 15:10:56 crc kubenswrapper[4731]: E1125 15:10:56.813488 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:c0b5f124a37c1538042c0e63f0978429572e2a851d7f3a6eb80de09b86d755a0\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" podUID="bc21926a-c3cf-49fa-addf-01d3156c1b5f" Nov 25 15:10:57 crc kubenswrapper[4731]: E1125 15:10:57.813929 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:4094e7fc11a33e8e2b6768a053cafaf5b122446d23f9113d43d520cb64e9776c\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" podUID="61221a67-95aa-4e96-a41c-4d9343305157" Nov 25 15:10:57 crc kubenswrapper[4731]: E1125 15:10:57.814062 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:4838402d41d42c56613d43dc5041aae475a2b18e6172491d6c4d4a78a580697f\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" podUID="158dbcc7-a9af-464b-9886-4f5f6f4bbf71" Nov 25 15:10:58 crc kubenswrapper[4731]: E1125 15:10:58.814009 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:207578cb433471cc1a79c21a808c8a15489d1d3c9fa77e29f3f697c33917fec6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" podUID="f5ea628e-44a7-499c-a433-14d18a9bb804" Nov 25 15:10:59 crc kubenswrapper[4731]: E1125 15:10:59.813173 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.107:5001/openstack-k8s-operators/telemetry-operator:dbb3429c35dcb61ad9e85aef0b186127dd7e203f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" podUID="089e8703-f71e-4f81-91af-19224ba7b1b4" Nov 25 15:11:00 crc kubenswrapper[4731]: I1125 15:11:00.815401 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:11:02 crc kubenswrapper[4731]: I1125 15:11:02.239139 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" event={"ID":"1bdef4b3-96db-42a5-aefd-2641cced3d1e","Type":"ContainerStarted","Data":"f2e96e9820d76a0eff0ea0d4dbb0c7ce4caf09494aaf842df7c1d577a256783c"} Nov 25 15:11:02 crc kubenswrapper[4731]: I1125 15:11:02.241327 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" Nov 25 15:11:02 crc kubenswrapper[4731]: I1125 15:11:02.276931 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" podStartSLOduration=17.957762343 podStartE2EDuration="1m8.276911856s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:10:11.241022344 +0000 UTC m=+969.988156774" lastFinishedPulling="2025-11-25 15:11:01.560171857 +0000 UTC m=+1020.307306287" observedRunningTime="2025-11-25 15:11:02.27192885 +0000 UTC m=+1021.019063340" watchObservedRunningTime="2025-11-25 15:11:02.276911856 +0000 UTC m=+1021.024046286" Nov 25 15:11:09 crc kubenswrapper[4731]: I1125 15:11:09.069914 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-544b9bb9-kctp7" Nov 25 15:11:09 crc kubenswrapper[4731]: I1125 15:11:09.302928 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" event={"ID":"bc21926a-c3cf-49fa-addf-01d3156c1b5f","Type":"ContainerStarted","Data":"369fe1ff1b16b2938e2f99328f3ead7bd2d085f63f4afc00ffdf61f9824bf2e2"} Nov 25 15:11:09 crc kubenswrapper[4731]: I1125 15:11:09.303638 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" Nov 25 15:11:09 crc kubenswrapper[4731]: I1125 15:11:09.329078 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" podStartSLOduration=3.21696686 podStartE2EDuration="1m15.329049211s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.60052681 +0000 UTC m=+955.347661240" lastFinishedPulling="2025-11-25 15:11:08.712609161 +0000 UTC m=+1027.459743591" observedRunningTime="2025-11-25 15:11:09.324330773 +0000 UTC m=+1028.071465203" watchObservedRunningTime="2025-11-25 15:11:09.329049211 +0000 UTC m=+1028.076183641" Nov 25 15:11:10 crc kubenswrapper[4731]: I1125 15:11:10.314537 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" event={"ID":"2aeac7ab-78a9-49f2-9640-0c91b73bbe57","Type":"ContainerStarted","Data":"fe1cf5c1cec04f76ce6806b165d989f66837a4358d891e8d07d0922a5ee37d34"} Nov 25 15:11:10 crc 
kubenswrapper[4731]: I1125 15:11:10.333877 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-6tddn" podStartSLOduration=2.822188156 podStartE2EDuration="1m15.333859339s" podCreationTimestamp="2025-11-25 15:09:55 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.715599794 +0000 UTC m=+955.462734224" lastFinishedPulling="2025-11-25 15:11:09.227270977 +0000 UTC m=+1027.974405407" observedRunningTime="2025-11-25 15:11:10.333168832 +0000 UTC m=+1029.080303262" watchObservedRunningTime="2025-11-25 15:11:10.333859339 +0000 UTC m=+1029.080993769" Nov 25 15:11:12 crc kubenswrapper[4731]: I1125 15:11:12.329672 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" event={"ID":"61221a67-95aa-4e96-a41c-4d9343305157","Type":"ContainerStarted","Data":"f76ab39f1b168dae3f7eac1d5a6b8a9505d79f1fdcf3bb56d4b3e8d689f5e402"} Nov 25 15:11:12 crc kubenswrapper[4731]: I1125 15:11:12.330433 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" Nov 25 15:11:12 crc kubenswrapper[4731]: I1125 15:11:12.332243 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" event={"ID":"089e8703-f71e-4f81-91af-19224ba7b1b4","Type":"ContainerStarted","Data":"ad006b35098d44a942ae327c14f76438092fb2dfb9181dc128380797a955ca8c"} Nov 25 15:11:12 crc kubenswrapper[4731]: I1125 15:11:12.332476 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" Nov 25 15:11:12 crc kubenswrapper[4731]: I1125 15:11:12.353071 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" podStartSLOduration=3.63292635 podStartE2EDuration="1m18.353051004s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.582317953 +0000 UTC m=+955.329452383" lastFinishedPulling="2025-11-25 15:11:11.302442607 +0000 UTC m=+1030.049577037" observedRunningTime="2025-11-25 15:11:12.352137711 +0000 UTC m=+1031.099272141" watchObservedRunningTime="2025-11-25 15:11:12.353051004 +0000 UTC m=+1031.100185434" Nov 25 15:11:12 crc kubenswrapper[4731]: I1125 15:11:12.379164 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" podStartSLOduration=3.178442573 podStartE2EDuration="1m18.379135659s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.67388254 +0000 UTC m=+955.421016970" lastFinishedPulling="2025-11-25 15:11:11.874575626 +0000 UTC m=+1030.621710056" observedRunningTime="2025-11-25 15:11:12.373789045 +0000 UTC m=+1031.120923485" watchObservedRunningTime="2025-11-25 15:11:12.379135659 +0000 UTC m=+1031.126270089" Nov 25 15:11:13 crc kubenswrapper[4731]: I1125 15:11:13.341974 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" event={"ID":"f5ea628e-44a7-499c-a433-14d18a9bb804","Type":"ContainerStarted","Data":"ef621beb000c6a325c7475e9faf8b3403c9e56af3ae0d211f5e01dfde6454c6a"} Nov 25 15:11:13 crc kubenswrapper[4731]: I1125 15:11:13.342203 4731 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" Nov 25 15:11:13 crc kubenswrapper[4731]: I1125 15:11:13.344937 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" event={"ID":"158dbcc7-a9af-464b-9886-4f5f6f4bbf71","Type":"ContainerStarted","Data":"2d5d1814ae366a3e71aa48a0ad3f9dfef05a11efc53c200d34f308a862d7fc85"} Nov 25 15:11:13 crc kubenswrapper[4731]: I1125 15:11:13.345316 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" Nov 25 15:11:13 crc kubenswrapper[4731]: I1125 15:11:13.366931 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" podStartSLOduration=3.717315089 podStartE2EDuration="1m19.366909299s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.615745363 +0000 UTC m=+955.362879793" lastFinishedPulling="2025-11-25 15:11:12.265339573 +0000 UTC m=+1031.012474003" observedRunningTime="2025-11-25 15:11:13.362210901 +0000 UTC m=+1032.109345341" watchObservedRunningTime="2025-11-25 15:11:13.366909299 +0000 UTC m=+1032.114043729" Nov 25 15:11:13 crc kubenswrapper[4731]: I1125 15:11:13.384253 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" podStartSLOduration=3.369377989 podStartE2EDuration="1m19.384235654s" podCreationTimestamp="2025-11-25 15:09:54 +0000 UTC" firstStartedPulling="2025-11-25 15:09:56.613756775 +0000 UTC m=+955.360891205" lastFinishedPulling="2025-11-25 15:11:12.62861444 +0000 UTC m=+1031.375748870" observedRunningTime="2025-11-25 15:11:13.379781932 +0000 UTC m=+1032.126916362" watchObservedRunningTime="2025-11-25 15:11:13.384235654 +0000 UTC m=+1032.131370084" Nov 25 15:11:15 crc kubenswrapper[4731]: I1125 15:11:15.520496 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-6fdc4fcf86-ft6fb" Nov 25 15:11:25 crc kubenswrapper[4731]: I1125 15:11:25.147435 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" Nov 25 15:11:25 crc kubenswrapper[4731]: I1125 15:11:25.500613 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5db546f9d9-8m7t9" Nov 25 15:11:25 crc kubenswrapper[4731]: I1125 15:11:25.511638 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-864885998-sk2wl" Nov 25 15:11:25 crc kubenswrapper[4731]: I1125 15:11:25.554339 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-75cd8d4b6c-kkqgj" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.681638 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nbfn9"] Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.683771 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.686767 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.687965 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.688243 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-tpqb6" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.688323 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.697340 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nbfn9"] Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.770234 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-md65g"] Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.776080 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.787671 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.801255 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hszzl\" (UniqueName: \"kubernetes.io/projected/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-kube-api-access-hszzl\") pod \"dnsmasq-dns-675f4bcbfc-nbfn9\" (UID: \"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.801329 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-config\") pod \"dnsmasq-dns-675f4bcbfc-nbfn9\" (UID: \"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.824981 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-md65g"] Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.902346 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hszzl\" (UniqueName: \"kubernetes.io/projected/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-kube-api-access-hszzl\") pod \"dnsmasq-dns-675f4bcbfc-nbfn9\" (UID: \"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.902416 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-md65g\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.902442 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-config\") pod \"dnsmasq-dns-675f4bcbfc-nbfn9\" (UID: \"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" Nov 25 15:11:40 
crc kubenswrapper[4731]: I1125 15:11:40.902474 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-config\") pod \"dnsmasq-dns-78dd6ddcc-md65g\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.902522 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h8wc\" (UniqueName: \"kubernetes.io/projected/f0dddbe1-6611-4f1b-ae63-36a1914febdf-kube-api-access-7h8wc\") pod \"dnsmasq-dns-78dd6ddcc-md65g\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.903482 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-config\") pod \"dnsmasq-dns-675f4bcbfc-nbfn9\" (UID: \"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" Nov 25 15:11:40 crc kubenswrapper[4731]: I1125 15:11:40.920749 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hszzl\" (UniqueName: \"kubernetes.io/projected/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-kube-api-access-hszzl\") pod \"dnsmasq-dns-675f4bcbfc-nbfn9\" (UID: \"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7\") " pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" Nov 25 15:11:41 crc kubenswrapper[4731]: I1125 15:11:41.003697 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h8wc\" (UniqueName: \"kubernetes.io/projected/f0dddbe1-6611-4f1b-ae63-36a1914febdf-kube-api-access-7h8wc\") pod \"dnsmasq-dns-78dd6ddcc-md65g\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:41 crc kubenswrapper[4731]: I1125 15:11:41.003834 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-md65g\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:41 crc kubenswrapper[4731]: I1125 15:11:41.003884 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-config\") pod \"dnsmasq-dns-78dd6ddcc-md65g\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:41 crc kubenswrapper[4731]: I1125 15:11:41.004938 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-config\") pod \"dnsmasq-dns-78dd6ddcc-md65g\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:41 crc kubenswrapper[4731]: I1125 15:11:41.005200 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-md65g\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:41 crc kubenswrapper[4731]: I1125 15:11:41.031780 4731 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"kube-api-access-7h8wc\" (UniqueName: \"kubernetes.io/projected/f0dddbe1-6611-4f1b-ae63-36a1914febdf-kube-api-access-7h8wc\") pod \"dnsmasq-dns-78dd6ddcc-md65g\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:41 crc kubenswrapper[4731]: I1125 15:11:41.044085 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" Nov 25 15:11:41 crc kubenswrapper[4731]: I1125 15:11:41.117017 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:11:41 crc kubenswrapper[4731]: I1125 15:11:41.545323 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nbfn9"] Nov 25 15:11:41 crc kubenswrapper[4731]: I1125 15:11:41.620197 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-md65g"] Nov 25 15:11:41 crc kubenswrapper[4731]: W1125 15:11:41.620391 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0dddbe1_6611_4f1b_ae63_36a1914febdf.slice/crio-5268576b071807025ec056482da9715e86b7eddc331e3e703b3c8c2ccec6105d WatchSource:0}: Error finding container 5268576b071807025ec056482da9715e86b7eddc331e3e703b3c8c2ccec6105d: Status 404 returned error can't find the container with id 5268576b071807025ec056482da9715e86b7eddc331e3e703b3c8c2ccec6105d Nov 25 15:11:42 crc kubenswrapper[4731]: I1125 15:11:42.553389 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" event={"ID":"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7","Type":"ContainerStarted","Data":"3fde5dd022b138d2982c8986396ff7929a85a2945bc7dbfb51fda15322ed66cb"} Nov 25 15:11:42 crc kubenswrapper[4731]: I1125 15:11:42.559115 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" event={"ID":"f0dddbe1-6611-4f1b-ae63-36a1914febdf","Type":"ContainerStarted","Data":"5268576b071807025ec056482da9715e86b7eddc331e3e703b3c8c2ccec6105d"} Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.484961 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nbfn9"] Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.510714 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dmp6l"] Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.512142 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.539723 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dmp6l"] Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.560992 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhjtg\" (UniqueName: \"kubernetes.io/projected/42edf0e1-7def-4b40-8b9f-16601761c5bf-kube-api-access-xhjtg\") pod \"dnsmasq-dns-666b6646f7-dmp6l\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.561042 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dmp6l\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.561070 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-config\") pod \"dnsmasq-dns-666b6646f7-dmp6l\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.662662 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhjtg\" (UniqueName: \"kubernetes.io/projected/42edf0e1-7def-4b40-8b9f-16601761c5bf-kube-api-access-xhjtg\") pod \"dnsmasq-dns-666b6646f7-dmp6l\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.662714 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dmp6l\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.662736 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-config\") pod \"dnsmasq-dns-666b6646f7-dmp6l\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.663701 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-config\") pod \"dnsmasq-dns-666b6646f7-dmp6l\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.666020 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-dmp6l\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.713318 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhjtg\" (UniqueName: 
\"kubernetes.io/projected/42edf0e1-7def-4b40-8b9f-16601761c5bf-kube-api-access-xhjtg\") pod \"dnsmasq-dns-666b6646f7-dmp6l\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.803107 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-md65g"] Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.829548 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sbwr6"] Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.830910 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.842537 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sbwr6"] Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.845513 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.868613 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdblr\" (UniqueName: \"kubernetes.io/projected/ba772372-7abb-4468-985f-769cf2ece70b-kube-api-access-pdblr\") pod \"dnsmasq-dns-57d769cc4f-sbwr6\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.869434 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-config\") pod \"dnsmasq-dns-57d769cc4f-sbwr6\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.869597 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-sbwr6\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.972666 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-sbwr6\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.972751 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdblr\" (UniqueName: \"kubernetes.io/projected/ba772372-7abb-4468-985f-769cf2ece70b-kube-api-access-pdblr\") pod \"dnsmasq-dns-57d769cc4f-sbwr6\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.972851 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-config\") pod \"dnsmasq-dns-57d769cc4f-sbwr6\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.973746 4731 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-config\") pod \"dnsmasq-dns-57d769cc4f-sbwr6\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.974378 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-sbwr6\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:43 crc kubenswrapper[4731]: I1125 15:11:43.996003 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdblr\" (UniqueName: \"kubernetes.io/projected/ba772372-7abb-4468-985f-769cf2ece70b-kube-api-access-pdblr\") pod \"dnsmasq-dns-57d769cc4f-sbwr6\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.174168 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.187326 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dmp6l"] Nov 25 15:11:44 crc kubenswrapper[4731]: W1125 15:11:44.200461 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42edf0e1_7def_4b40_8b9f_16601761c5bf.slice/crio-6dd39291d4372f1ded8e61bf131394f9fc1bc51b8d4dfc409b6385ad8e4d7700 WatchSource:0}: Error finding container 6dd39291d4372f1ded8e61bf131394f9fc1bc51b8d4dfc409b6385ad8e4d7700: Status 404 returned error can't find the container with id 6dd39291d4372f1ded8e61bf131394f9fc1bc51b8d4dfc409b6385ad8e4d7700 Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.580556 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" event={"ID":"42edf0e1-7def-4b40-8b9f-16601761c5bf","Type":"ContainerStarted","Data":"6dd39291d4372f1ded8e61bf131394f9fc1bc51b8d4dfc409b6385ad8e4d7700"} Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.676226 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.677905 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.686896 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-r6w67" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.695324 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.695584 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.695653 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.695734 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.695925 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.704550 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.706574 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790323 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-config-data\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790406 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnlzr\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-kube-api-access-mnlzr\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790437 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790498 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790524 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790555 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790582 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790604 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790638 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790673 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f9c970e0-c27c-4a82-afd5-97aacf7536f9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.790700 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f9c970e0-c27c-4a82-afd5-97aacf7536f9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.799155 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sbwr6"] Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.891792 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-config-data\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.891892 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnlzr\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-kube-api-access-mnlzr\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.891923 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.891960 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" 
(UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.891986 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.892016 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.892041 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.892061 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.892093 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.892131 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f9c970e0-c27c-4a82-afd5-97aacf7536f9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.892156 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f9c970e0-c27c-4a82-afd5-97aacf7536f9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.893318 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.893369 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-config-data\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.893873 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.894181 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.894273 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.894980 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.899889 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f9c970e0-c27c-4a82-afd5-97aacf7536f9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.907800 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f9c970e0-c27c-4a82-afd5-97aacf7536f9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.908411 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.911100 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnlzr\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-kube-api-access-mnlzr\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.916360 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.925596 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " pod="openstack/rabbitmq-server-0" Nov 25 15:11:44 crc 
kubenswrapper[4731]: I1125 15:11:44.970086 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.974416 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.978370 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.978453 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.978470 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.978559 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hmmsg" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.978729 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.978758 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Nov 25 15:11:44 crc kubenswrapper[4731]: I1125 15:11:44.979035 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.007635 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.078204 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.094738 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.094779 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.094809 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.094848 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.094871 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.094892 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdjv8\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-kube-api-access-gdjv8\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.094912 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.094934 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.094949 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-confd\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.094966 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.095002 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.196434 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.196487 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.196515 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.196540 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.196559 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.196589 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdjv8\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-kube-api-access-gdjv8\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.196611 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " 
pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.197000 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.197020 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.197035 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.197076 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.198134 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.198409 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.199161 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.199398 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.199490 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.201700 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.206101 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.206690 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.207329 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.207601 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.216719 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdjv8\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-kube-api-access-gdjv8\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.240759 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.301911 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.634499 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" event={"ID":"ba772372-7abb-4468-985f-769cf2ece70b","Type":"ContainerStarted","Data":"8bab5deca7373f46f3ba6502728ed7368e34d2b25adb7f4f6513be238edc7c7c"} Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.652904 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 15:11:45 crc kubenswrapper[4731]: W1125 15:11:45.733607 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf9c970e0_c27c_4a82_afd5_97aacf7536f9.slice/crio-54d79d86da21b9d0bda50ce35d49acb98e0d88148d6261692ad1bb332e58333c WatchSource:0}: Error finding container 54d79d86da21b9d0bda50ce35d49acb98e0d88148d6261692ad1bb332e58333c: Status 404 returned error can't find the container with id 54d79d86da21b9d0bda50ce35d49acb98e0d88148d6261692ad1bb332e58333c Nov 25 15:11:45 crc kubenswrapper[4731]: I1125 15:11:45.893839 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 15:11:45 crc kubenswrapper[4731]: W1125 15:11:45.909180 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08b3ef48_58e1_4da1_a3ff_ecff7f3b19e8.slice/crio-5c568335f47f64342823a07158b91e5ec364d5802789814de7c3688c1af93ef7 WatchSource:0}: Error finding container 5c568335f47f64342823a07158b91e5ec364d5802789814de7c3688c1af93ef7: Status 404 returned error can't find the container with id 5c568335f47f64342823a07158b91e5ec364d5802789814de7c3688c1af93ef7 Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.558188 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.560037 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.564432 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ddlwp" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.565241 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.565275 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.565508 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.570944 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.598123 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.681991 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9c970e0-c27c-4a82-afd5-97aacf7536f9","Type":"ContainerStarted","Data":"54d79d86da21b9d0bda50ce35d49acb98e0d88148d6261692ad1bb332e58333c"} Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.691327 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8","Type":"ContainerStarted","Data":"5c568335f47f64342823a07158b91e5ec364d5802789814de7c3688c1af93ef7"} Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.751214 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c32b705-7078-46a9-9858-16932a2b2675-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.751326 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.751360 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c32b705-7078-46a9-9858-16932a2b2675-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.751382 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lw7x\" (UniqueName: \"kubernetes.io/projected/6c32b705-7078-46a9-9858-16932a2b2675-kube-api-access-2lw7x\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.751411 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c32b705-7078-46a9-9858-16932a2b2675-galera-tls-certs\") pod \"openstack-galera-0\" (UID: 
\"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.751438 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c32b705-7078-46a9-9858-16932a2b2675-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.751466 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c32b705-7078-46a9-9858-16932a2b2675-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.751492 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c32b705-7078-46a9-9858-16932a2b2675-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.853500 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c32b705-7078-46a9-9858-16932a2b2675-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.853579 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c32b705-7078-46a9-9858-16932a2b2675-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.853653 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.853683 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c32b705-7078-46a9-9858-16932a2b2675-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.853706 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lw7x\" (UniqueName: \"kubernetes.io/projected/6c32b705-7078-46a9-9858-16932a2b2675-kube-api-access-2lw7x\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.853729 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c32b705-7078-46a9-9858-16932a2b2675-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.853757 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c32b705-7078-46a9-9858-16932a2b2675-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.853791 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c32b705-7078-46a9-9858-16932a2b2675-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.854759 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6c32b705-7078-46a9-9858-16932a2b2675-kolla-config\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.855123 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.855483 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6c32b705-7078-46a9-9858-16932a2b2675-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.856059 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6c32b705-7078-46a9-9858-16932a2b2675-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.856611 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6c32b705-7078-46a9-9858-16932a2b2675-config-data-default\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.868776 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c32b705-7078-46a9-9858-16932a2b2675-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.879337 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c32b705-7078-46a9-9858-16932a2b2675-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.885690 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lw7x\" (UniqueName: \"kubernetes.io/projected/6c32b705-7078-46a9-9858-16932a2b2675-kube-api-access-2lw7x\") pod \"openstack-galera-0\" (UID: 
\"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:46 crc kubenswrapper[4731]: I1125 15:11:46.920436 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"6c32b705-7078-46a9-9858-16932a2b2675\") " pod="openstack/openstack-galera-0" Nov 25 15:11:47 crc kubenswrapper[4731]: I1125 15:11:47.200195 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Nov 25 15:11:47 crc kubenswrapper[4731]: I1125 15:11:47.960434 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 15:11:47 crc kubenswrapper[4731]: I1125 15:11:47.966566 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:47 crc kubenswrapper[4731]: I1125 15:11:47.969563 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Nov 25 15:11:47 crc kubenswrapper[4731]: I1125 15:11:47.969808 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-n2nxx" Nov 25 15:11:47 crc kubenswrapper[4731]: I1125 15:11:47.970140 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Nov 25 15:11:47 crc kubenswrapper[4731]: I1125 15:11:47.970525 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.006481 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.082539 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.082616 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/600ccfb1-2789-4b20-8012-9b6bdbf836ff-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.082650 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/600ccfb1-2789-4b20-8012-9b6bdbf836ff-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.082682 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/600ccfb1-2789-4b20-8012-9b6bdbf836ff-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.082936 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xthr6\" (UniqueName: \"kubernetes.io/projected/600ccfb1-2789-4b20-8012-9b6bdbf836ff-kube-api-access-xthr6\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.083005 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/600ccfb1-2789-4b20-8012-9b6bdbf836ff-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.083101 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/600ccfb1-2789-4b20-8012-9b6bdbf836ff-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.083146 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/600ccfb1-2789-4b20-8012-9b6bdbf836ff-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.184555 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xthr6\" (UniqueName: \"kubernetes.io/projected/600ccfb1-2789-4b20-8012-9b6bdbf836ff-kube-api-access-xthr6\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.184612 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/600ccfb1-2789-4b20-8012-9b6bdbf836ff-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.184646 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/600ccfb1-2789-4b20-8012-9b6bdbf836ff-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.184678 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/600ccfb1-2789-4b20-8012-9b6bdbf836ff-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.184725 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.184753 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/600ccfb1-2789-4b20-8012-9b6bdbf836ff-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.184771 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/600ccfb1-2789-4b20-8012-9b6bdbf836ff-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.184797 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/600ccfb1-2789-4b20-8012-9b6bdbf836ff-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.185274 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/600ccfb1-2789-4b20-8012-9b6bdbf836ff-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.186105 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/600ccfb1-2789-4b20-8012-9b6bdbf836ff-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.186227 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.189375 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/600ccfb1-2789-4b20-8012-9b6bdbf836ff-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.189549 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/600ccfb1-2789-4b20-8012-9b6bdbf836ff-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.191396 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/600ccfb1-2789-4b20-8012-9b6bdbf836ff-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.194045 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/600ccfb1-2789-4b20-8012-9b6bdbf836ff-galera-tls-certs\") pod 
\"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.205266 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xthr6\" (UniqueName: \"kubernetes.io/projected/600ccfb1-2789-4b20-8012-9b6bdbf836ff-kube-api-access-xthr6\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.214412 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"openstack-cell1-galera-0\" (UID: \"600ccfb1-2789-4b20-8012-9b6bdbf836ff\") " pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.308926 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.356753 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.359178 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.366915 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.367032 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-bjvjf" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.367247 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.374519 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.520214 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e42be3-60a3-434a-92b2-6692d4316991-memcached-tls-certs\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.520345 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29e42be3-60a3-434a-92b2-6692d4316991-config-data\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.520426 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e42be3-60a3-434a-92b2-6692d4316991-combined-ca-bundle\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.520480 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbhbj\" (UniqueName: \"kubernetes.io/projected/29e42be3-60a3-434a-92b2-6692d4316991-kube-api-access-pbhbj\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc 
kubenswrapper[4731]: I1125 15:11:48.520753 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29e42be3-60a3-434a-92b2-6692d4316991-kolla-config\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.622234 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29e42be3-60a3-434a-92b2-6692d4316991-kolla-config\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.622321 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e42be3-60a3-434a-92b2-6692d4316991-memcached-tls-certs\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.622348 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29e42be3-60a3-434a-92b2-6692d4316991-config-data\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.622381 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e42be3-60a3-434a-92b2-6692d4316991-combined-ca-bundle\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.622410 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbhbj\" (UniqueName: \"kubernetes.io/projected/29e42be3-60a3-434a-92b2-6692d4316991-kube-api-access-pbhbj\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.623562 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/29e42be3-60a3-434a-92b2-6692d4316991-config-data\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.623584 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/29e42be3-60a3-434a-92b2-6692d4316991-kolla-config\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.627991 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e42be3-60a3-434a-92b2-6692d4316991-memcached-tls-certs\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.628201 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e42be3-60a3-434a-92b2-6692d4316991-combined-ca-bundle\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc 
kubenswrapper[4731]: I1125 15:11:48.648381 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbhbj\" (UniqueName: \"kubernetes.io/projected/29e42be3-60a3-434a-92b2-6692d4316991-kube-api-access-pbhbj\") pod \"memcached-0\" (UID: \"29e42be3-60a3-434a-92b2-6692d4316991\") " pod="openstack/memcached-0" Nov 25 15:11:48 crc kubenswrapper[4731]: I1125 15:11:48.681297 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Nov 25 15:11:50 crc kubenswrapper[4731]: I1125 15:11:50.381890 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:11:50 crc kubenswrapper[4731]: I1125 15:11:50.385536 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:11:50 crc kubenswrapper[4731]: I1125 15:11:50.391737 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-gjqs5" Nov 25 15:11:50 crc kubenswrapper[4731]: I1125 15:11:50.396308 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:11:50 crc kubenswrapper[4731]: I1125 15:11:50.454979 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c427c\" (UniqueName: \"kubernetes.io/projected/7142ee70-31f5-466a-8772-d4e14f8986ee-kube-api-access-c427c\") pod \"kube-state-metrics-0\" (UID: \"7142ee70-31f5-466a-8772-d4e14f8986ee\") " pod="openstack/kube-state-metrics-0" Nov 25 15:11:50 crc kubenswrapper[4731]: I1125 15:11:50.556156 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c427c\" (UniqueName: \"kubernetes.io/projected/7142ee70-31f5-466a-8772-d4e14f8986ee-kube-api-access-c427c\") pod \"kube-state-metrics-0\" (UID: \"7142ee70-31f5-466a-8772-d4e14f8986ee\") " pod="openstack/kube-state-metrics-0" Nov 25 15:11:50 crc kubenswrapper[4731]: I1125 15:11:50.604931 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c427c\" (UniqueName: \"kubernetes.io/projected/7142ee70-31f5-466a-8772-d4e14f8986ee-kube-api-access-c427c\") pod \"kube-state-metrics-0\" (UID: \"7142ee70-31f5-466a-8772-d4e14f8986ee\") " pod="openstack/kube-state-metrics-0" Nov 25 15:11:50 crc kubenswrapper[4731]: I1125 15:11:50.711190 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.825576 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-crvzg"] Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.827057 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crvzg" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.830226 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.830860 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.831055 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-mkbbj" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.845512 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-w2gw5"] Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.848438 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.860324 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crvzg"] Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.874391 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w2gw5"] Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.923728 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5881193a-868d-4633-ae6d-0428aa755057-scripts\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.923900 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5881193a-868d-4633-ae6d-0428aa755057-var-run\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.923936 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b83f089-05d3-4362-b02c-ba4293807c86-scripts\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.924026 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-var-lib\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.924051 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnb7p\" (UniqueName: \"kubernetes.io/projected/5881193a-868d-4633-ae6d-0428aa755057-kube-api-access-fnb7p\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.924197 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5881193a-868d-4633-ae6d-0428aa755057-combined-ca-bundle\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " 
pod="openstack/ovn-controller-crvzg" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.924811 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5881193a-868d-4633-ae6d-0428aa755057-var-run-ovn\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.925158 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-var-run\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.925242 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7tkxq\" (UniqueName: \"kubernetes.io/projected/0b83f089-05d3-4362-b02c-ba4293807c86-kube-api-access-7tkxq\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.925302 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-var-log\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.925356 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5881193a-868d-4633-ae6d-0428aa755057-var-log-ovn\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.925709 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5881193a-868d-4633-ae6d-0428aa755057-ovn-controller-tls-certs\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:53 crc kubenswrapper[4731]: I1125 15:11:53.925794 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-etc-ovs\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027620 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5881193a-868d-4633-ae6d-0428aa755057-ovn-controller-tls-certs\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027668 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-etc-ovs\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 
15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027715 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5881193a-868d-4633-ae6d-0428aa755057-scripts\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027734 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5881193a-868d-4633-ae6d-0428aa755057-var-run\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027756 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b83f089-05d3-4362-b02c-ba4293807c86-scripts\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027793 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-var-lib\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027828 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnb7p\" (UniqueName: \"kubernetes.io/projected/5881193a-868d-4633-ae6d-0428aa755057-kube-api-access-fnb7p\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027876 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5881193a-868d-4633-ae6d-0428aa755057-combined-ca-bundle\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027908 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5881193a-868d-4633-ae6d-0428aa755057-var-run-ovn\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027929 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-var-run\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027961 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7tkxq\" (UniqueName: \"kubernetes.io/projected/0b83f089-05d3-4362-b02c-ba4293807c86-kube-api-access-7tkxq\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.027985 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-var-log\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.028004 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5881193a-868d-4633-ae6d-0428aa755057-var-log-ovn\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.028470 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-etc-ovs\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.028543 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/5881193a-868d-4633-ae6d-0428aa755057-var-log-ovn\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.028670 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/5881193a-868d-4633-ae6d-0428aa755057-var-run-ovn\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.028740 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-var-run\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.029080 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-var-log\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.030996 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5881193a-868d-4633-ae6d-0428aa755057-scripts\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.031084 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/0b83f089-05d3-4362-b02c-ba4293807c86-scripts\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.031146 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5881193a-868d-4633-ae6d-0428aa755057-var-run\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.031770 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/0b83f089-05d3-4362-b02c-ba4293807c86-var-lib\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.035787 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/5881193a-868d-4633-ae6d-0428aa755057-ovn-controller-tls-certs\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.041522 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5881193a-868d-4633-ae6d-0428aa755057-combined-ca-bundle\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.053971 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnb7p\" (UniqueName: \"kubernetes.io/projected/5881193a-868d-4633-ae6d-0428aa755057-kube-api-access-fnb7p\") pod \"ovn-controller-crvzg\" (UID: \"5881193a-868d-4633-ae6d-0428aa755057\") " pod="openstack/ovn-controller-crvzg" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.054176 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7tkxq\" (UniqueName: \"kubernetes.io/projected/0b83f089-05d3-4362-b02c-ba4293807c86-kube-api-access-7tkxq\") pod \"ovn-controller-ovs-w2gw5\" (UID: \"0b83f089-05d3-4362-b02c-ba4293807c86\") " pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.167154 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:11:54 crc kubenswrapper[4731]: I1125 15:11:54.167540 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crvzg" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.724800 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.726494 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.733902 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.769013 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nzkxk" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.769234 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.769858 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.769976 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.773937 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.871264 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecb6adf-6993-4a45-a85a-4fa15a484be1-config\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.871328 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fecb6adf-6993-4a45-a85a-4fa15a484be1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.871404 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.871445 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fecb6adf-6993-4a45-a85a-4fa15a484be1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.871474 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-948zv\" (UniqueName: \"kubernetes.io/projected/fecb6adf-6993-4a45-a85a-4fa15a484be1-kube-api-access-948zv\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.871496 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fecb6adf-6993-4a45-a85a-4fa15a484be1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.871568 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/fecb6adf-6993-4a45-a85a-4fa15a484be1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.871595 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fecb6adf-6993-4a45-a85a-4fa15a484be1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.973677 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fecb6adf-6993-4a45-a85a-4fa15a484be1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.973786 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecb6adf-6993-4a45-a85a-4fa15a484be1-config\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.973851 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fecb6adf-6993-4a45-a85a-4fa15a484be1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.973926 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.973964 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fecb6adf-6993-4a45-a85a-4fa15a484be1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.973987 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-948zv\" (UniqueName: \"kubernetes.io/projected/fecb6adf-6993-4a45-a85a-4fa15a484be1-kube-api-access-948zv\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.974010 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fecb6adf-6993-4a45-a85a-4fa15a484be1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.974070 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fecb6adf-6993-4a45-a85a-4fa15a484be1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 
15:11:55.976032 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.976687 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fecb6adf-6993-4a45-a85a-4fa15a484be1-config\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.976751 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fecb6adf-6993-4a45-a85a-4fa15a484be1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.978032 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fecb6adf-6993-4a45-a85a-4fa15a484be1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.980239 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fecb6adf-6993-4a45-a85a-4fa15a484be1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.980311 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/fecb6adf-6993-4a45-a85a-4fa15a484be1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:55 crc kubenswrapper[4731]: I1125 15:11:55.986206 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fecb6adf-6993-4a45-a85a-4fa15a484be1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:56 crc kubenswrapper[4731]: I1125 15:11:56.003566 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-948zv\" (UniqueName: \"kubernetes.io/projected/fecb6adf-6993-4a45-a85a-4fa15a484be1-kube-api-access-948zv\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:56 crc kubenswrapper[4731]: I1125 15:11:56.014094 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"ovsdbserver-nb-0\" (UID: \"fecb6adf-6993-4a45-a85a-4fa15a484be1\") " pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:56 crc kubenswrapper[4731]: I1125 15:11:56.102454 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.048495 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.051363 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.054197 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.055218 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.055490 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.055585 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tmflm" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.068237 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.118352 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.118437 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzlzc\" (UniqueName: \"kubernetes.io/projected/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-kube-api-access-dzlzc\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.118491 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.118531 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.118561 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.118601 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0" Nov 25 15:11:58 crc 
kubenswrapper[4731]: I1125 15:11:58.118633 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-config\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.118669 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.219997 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.220067 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzlzc\" (UniqueName: \"kubernetes.io/projected/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-kube-api-access-dzlzc\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.220105 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.220138 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.220172 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.220209 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.220235 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-config\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.220254 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.220485 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.220770 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.223964 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.224378 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-config\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.232245 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.234323 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.235189 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.238161 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzlzc\" (UniqueName: \"kubernetes.io/projected/1b455e72-bfa2-4a6f-9b54-8424af4e4c53-kube-api-access-dzlzc\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.249772 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"1b455e72-bfa2-4a6f-9b54-8424af4e4c53\") " pod="openstack/ovsdbserver-sb-0"
Nov 25 15:11:58 crc kubenswrapper[4731]: I1125 15:11:58.382261 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Nov 25 15:12:09 crc kubenswrapper[4731]: E1125 15:12:09.853935 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified"
Nov 25 15:12:09 crc kubenswrapper[4731]: E1125 15:12:09.855175 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdjv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:12:09 crc kubenswrapper[4731]: E1125 15:12:09.856673 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"
pod="openstack/rabbitmq-cell1-server-0" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" Nov 25 15:12:09 crc kubenswrapper[4731]: E1125 15:12:09.872038 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Nov 25 15:12:09 crc kubenswrapper[4731]: E1125 15:12:09.872224 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mnlzr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(f9c970e0-c27c-4a82-afd5-97aacf7536f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:12:09 crc kubenswrapper[4731]: E1125 15:12:09.873419 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/rabbitmq-server-0" podUID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" Nov 25 15:12:09 crc kubenswrapper[4731]: E1125 15:12:09.917005 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" Nov 25 15:12:09 crc kubenswrapper[4731]: E1125 15:12:09.917400 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.590123 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.591162 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7h8wc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-md65g_openstack(f0dddbe1-6611-4f1b-ae63-36a1914febdf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.592643 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" podUID="f0dddbe1-6611-4f1b-ae63-36a1914febdf"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.611981 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.612178 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hszzl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-nbfn9_openstack(6fdb6de7-8020-4b94-b4f1-97b4bdc133c7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.613577 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" podUID="6fdb6de7-8020-4b94-b4f1-97b4bdc133c7"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.620400 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.620541 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdblr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-sbwr6_openstack(ba772372-7abb-4468-985f-769cf2ece70b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.621970 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" podUID="ba772372-7abb-4468-985f-769cf2ece70b"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.657024 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.657440 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xhjtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-dmp6l_openstack(42edf0e1-7def-4b40-8b9f-16601761c5bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.659333 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" podUID="42edf0e1-7def-4b40-8b9f-16601761c5bf"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.923186 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" podUID="ba772372-7abb-4468-985f-769cf2ece70b"
Nov 25 15:12:10 crc kubenswrapper[4731]: E1125 15:12:10.926097 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" podUID="42edf0e1-7def-4b40-8b9f-16601761c5bf"
Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.037361 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.425486 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crvzg"]
Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.439765 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.446204 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-md65g"
4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.447666 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.455943 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.483882 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Nov 25 15:12:11 crc kubenswrapper[4731]: W1125 15:12:11.555549 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b83f089_05d3_4362_b02c_ba4293807c86.slice/crio-8d730a58b1ce243af9612f59a079cda26b77d380ffa5d4e697809496fb0558b1 WatchSource:0}: Error finding container 8d730a58b1ce243af9612f59a079cda26b77d380ffa5d4e697809496fb0558b1: Status 404 returned error can't find the container with id 8d730a58b1ce243af9612f59a079cda26b77d380ffa5d4e697809496fb0558b1 Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.556884 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-w2gw5"] Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.610519 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-config\") pod \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.610608 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-config\") pod \"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7\" (UID: \"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7\") " Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.610763 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-dns-svc\") pod \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.610806 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hszzl\" (UniqueName: \"kubernetes.io/projected/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-kube-api-access-hszzl\") pod \"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7\" (UID: \"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7\") " Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.610887 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h8wc\" (UniqueName: \"kubernetes.io/projected/f0dddbe1-6611-4f1b-ae63-36a1914febdf-kube-api-access-7h8wc\") pod \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\" (UID: \"f0dddbe1-6611-4f1b-ae63-36a1914febdf\") " Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.611361 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0dddbe1-6611-4f1b-ae63-36a1914febdf" (UID: "f0dddbe1-6611-4f1b-ae63-36a1914febdf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.611513 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-config" (OuterVolumeSpecName: "config") pod "6fdb6de7-8020-4b94-b4f1-97b4bdc133c7" (UID: "6fdb6de7-8020-4b94-b4f1-97b4bdc133c7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.612101 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-config" (OuterVolumeSpecName: "config") pod "f0dddbe1-6611-4f1b-ae63-36a1914febdf" (UID: "f0dddbe1-6611-4f1b-ae63-36a1914febdf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.616977 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0dddbe1-6611-4f1b-ae63-36a1914febdf-kube-api-access-7h8wc" (OuterVolumeSpecName: "kube-api-access-7h8wc") pod "f0dddbe1-6611-4f1b-ae63-36a1914febdf" (UID: "f0dddbe1-6611-4f1b-ae63-36a1914febdf"). InnerVolumeSpecName "kube-api-access-7h8wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.617032 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-kube-api-access-hszzl" (OuterVolumeSpecName: "kube-api-access-hszzl") pod "6fdb6de7-8020-4b94-b4f1-97b4bdc133c7" (UID: "6fdb6de7-8020-4b94-b4f1-97b4bdc133c7"). InnerVolumeSpecName "kube-api-access-hszzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.713107 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.713142 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.713153 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0dddbe1-6611-4f1b-ae63-36a1914febdf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.713163 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hszzl\" (UniqueName: \"kubernetes.io/projected/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7-kube-api-access-hszzl\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.713176 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h8wc\" (UniqueName: \"kubernetes.io/projected/f0dddbe1-6611-4f1b-ae63-36a1914febdf-kube-api-access-7h8wc\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.929481 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7142ee70-31f5-466a-8772-d4e14f8986ee","Type":"ContainerStarted","Data":"49ed0ac0a8177fbb9e74d586c21aa0fcb665890d6cd9c62dcfe7d5d339b0818c"} Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.930337 4731 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" event={"ID":"f0dddbe1-6611-4f1b-ae63-36a1914febdf","Type":"ContainerDied","Data":"5268576b071807025ec056482da9715e86b7eddc331e3e703b3c8c2ccec6105d"} Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.930388 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-md65g" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.933907 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w2gw5" event={"ID":"0b83f089-05d3-4362-b02c-ba4293807c86","Type":"ContainerStarted","Data":"8d730a58b1ce243af9612f59a079cda26b77d380ffa5d4e697809496fb0558b1"} Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.935771 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crvzg" event={"ID":"5881193a-868d-4633-ae6d-0428aa755057","Type":"ContainerStarted","Data":"197cb10637f3cd87040af5d9be1c8b787a3efe3a47e50baa904575a2e93ee63b"} Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.937021 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c32b705-7078-46a9-9858-16932a2b2675","Type":"ContainerStarted","Data":"b5bfae9ead91733a6b29f7796434688cc9a05c4b323ca7a4779565975ceb13b9"} Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.939757 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" event={"ID":"6fdb6de7-8020-4b94-b4f1-97b4bdc133c7","Type":"ContainerDied","Data":"3fde5dd022b138d2982c8986396ff7929a85a2945bc7dbfb51fda15322ed66cb"} Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.939846 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-nbfn9" Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.944970 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"600ccfb1-2789-4b20-8012-9b6bdbf836ff","Type":"ContainerStarted","Data":"f2e2cb03a2ec4e8ce3a00941aa7edd503c6833e70e02cbf65ce8579abb33832e"} Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.948004 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"29e42be3-60a3-434a-92b2-6692d4316991","Type":"ContainerStarted","Data":"d5c6d339298536d888dd6ebd817539c7393176d46512cc13a7a8fbdb218d2fa3"} Nov 25 15:12:11 crc kubenswrapper[4731]: I1125 15:12:11.997536 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-md65g"] Nov 25 15:12:12 crc kubenswrapper[4731]: I1125 15:12:12.000195 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-md65g"] Nov 25 15:12:12 crc kubenswrapper[4731]: I1125 15:12:12.056015 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nbfn9"] Nov 25 15:12:12 crc kubenswrapper[4731]: I1125 15:12:12.063258 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-nbfn9"] Nov 25 15:12:12 crc kubenswrapper[4731]: I1125 15:12:12.324148 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Nov 25 15:12:12 crc kubenswrapper[4731]: I1125 15:12:12.824786 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fdb6de7-8020-4b94-b4f1-97b4bdc133c7" path="/var/lib/kubelet/pods/6fdb6de7-8020-4b94-b4f1-97b4bdc133c7/volumes" Nov 25 15:12:12 crc kubenswrapper[4731]: I1125 15:12:12.825765 4731 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0dddbe1-6611-4f1b-ae63-36a1914febdf" path="/var/lib/kubelet/pods/f0dddbe1-6611-4f1b-ae63-36a1914febdf/volumes" Nov 25 15:12:12 crc kubenswrapper[4731]: I1125 15:12:12.960771 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1b455e72-bfa2-4a6f-9b54-8424af4e4c53","Type":"ContainerStarted","Data":"78ccfe00402d576aa0ce65efefd23a9dafe632c89728e0e800a64fe9e897245f"} Nov 25 15:12:13 crc kubenswrapper[4731]: I1125 15:12:13.195591 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Nov 25 15:12:13 crc kubenswrapper[4731]: W1125 15:12:13.930175 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfecb6adf_6993_4a45_a85a_4fa15a484be1.slice/crio-80ca7ceb3a5a015e8a79ae17231bfb56efb1c2aacb0d1e0b580bd326757ba12e WatchSource:0}: Error finding container 80ca7ceb3a5a015e8a79ae17231bfb56efb1c2aacb0d1e0b580bd326757ba12e: Status 404 returned error can't find the container with id 80ca7ceb3a5a015e8a79ae17231bfb56efb1c2aacb0d1e0b580bd326757ba12e Nov 25 15:12:13 crc kubenswrapper[4731]: I1125 15:12:13.967849 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fecb6adf-6993-4a45-a85a-4fa15a484be1","Type":"ContainerStarted","Data":"80ca7ceb3a5a015e8a79ae17231bfb56efb1c2aacb0d1e0b580bd326757ba12e"} Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.029204 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1b455e72-bfa2-4a6f-9b54-8424af4e4c53","Type":"ContainerStarted","Data":"64668c7e5c9534c42281937487307812d98b73a763e0a195b865c6900f5e4b21"} Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.032172 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fecb6adf-6993-4a45-a85a-4fa15a484be1","Type":"ContainerStarted","Data":"04adde9405d65fa8c3f47350869f8e7c205eaf00d6d7983ec32be6a755f259d9"} Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.034642 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c32b705-7078-46a9-9858-16932a2b2675","Type":"ContainerStarted","Data":"6af0cae137584674e32e0ae0a9dc12fda760ab572a2a05504153ce1803ca4ecc"} Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.038863 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"600ccfb1-2789-4b20-8012-9b6bdbf836ff","Type":"ContainerStarted","Data":"68ff313ede08eea2a15803a35aaee7ea071b95ae1389b566893f61e7c98215cf"} Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.041745 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"29e42be3-60a3-434a-92b2-6692d4316991","Type":"ContainerStarted","Data":"880870a0834dc8248fc6e2dc59489ae8eb7128622743b8d0d4928a642f01838f"} Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.041830 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.045374 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7142ee70-31f5-466a-8772-d4e14f8986ee","Type":"ContainerStarted","Data":"f6c1670ab91512ffec7d9ee0439365a0b2385bf092e65e2f5c56860ec740dbd9"} Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.045522 4731 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.047706 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crvzg" event={"ID":"5881193a-868d-4633-ae6d-0428aa755057","Type":"ContainerStarted","Data":"2e96561d46259d334b100fa262dd9693acd42db69a7c7023c587ffd807523247"} Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.048058 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-crvzg" Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.050065 4731 generic.go:334] "Generic (PLEG): container finished" podID="0b83f089-05d3-4362-b02c-ba4293807c86" containerID="2d028f36f3516d3eea3ea2a840ee38e484443bd69002bbcb2f05cbddb5117f69" exitCode=0 Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.050117 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w2gw5" event={"ID":"0b83f089-05d3-4362-b02c-ba4293807c86","Type":"ContainerDied","Data":"2d028f36f3516d3eea3ea2a840ee38e484443bd69002bbcb2f05cbddb5117f69"} Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.104985 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.168812599 podStartE2EDuration="30.10496726s" podCreationTimestamp="2025-11-25 15:11:50 +0000 UTC" firstStartedPulling="2025-11-25 15:12:11.058523402 +0000 UTC m=+1089.805657832" lastFinishedPulling="2025-11-25 15:12:18.994678063 +0000 UTC m=+1097.741812493" observedRunningTime="2025-11-25 15:12:20.081788439 +0000 UTC m=+1098.828922869" watchObservedRunningTime="2025-11-25 15:12:20.10496726 +0000 UTC m=+1098.852101690" Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.108280 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=25.161943883 podStartE2EDuration="32.108261323s" podCreationTimestamp="2025-11-25 15:11:48 +0000 UTC" firstStartedPulling="2025-11-25 15:12:11.497015696 +0000 UTC m=+1090.244150126" lastFinishedPulling="2025-11-25 15:12:18.443333136 +0000 UTC m=+1097.190467566" observedRunningTime="2025-11-25 15:12:20.102002766 +0000 UTC m=+1098.849137206" watchObservedRunningTime="2025-11-25 15:12:20.108261323 +0000 UTC m=+1098.855395753" Nov 25 15:12:20 crc kubenswrapper[4731]: I1125 15:12:20.176776 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-crvzg" podStartSLOduration=19.757015168 podStartE2EDuration="27.17675379s" podCreationTimestamp="2025-11-25 15:11:53 +0000 UTC" firstStartedPulling="2025-11-25 15:12:11.495548249 +0000 UTC m=+1090.242682679" lastFinishedPulling="2025-11-25 15:12:18.915286871 +0000 UTC m=+1097.662421301" observedRunningTime="2025-11-25 15:12:20.172149874 +0000 UTC m=+1098.919284304" watchObservedRunningTime="2025-11-25 15:12:20.17675379 +0000 UTC m=+1098.923888220" Nov 25 15:12:21 crc kubenswrapper[4731]: I1125 15:12:21.069747 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w2gw5" event={"ID":"0b83f089-05d3-4362-b02c-ba4293807c86","Type":"ContainerStarted","Data":"e5fa90415ee4279f506a5cd14952d653851a6493939796b67632b2b300b5a0ab"} Nov 25 15:12:21 crc kubenswrapper[4731]: I1125 15:12:21.070467 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-w2gw5" 
event={"ID":"0b83f089-05d3-4362-b02c-ba4293807c86","Type":"ContainerStarted","Data":"15758552a0d23c92e661d2c443871d882a0d77ffb073b67b668a9b07c001e2f1"} Nov 25 15:12:21 crc kubenswrapper[4731]: I1125 15:12:21.096531 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-w2gw5" podStartSLOduration=21.208628492 podStartE2EDuration="28.096508935s" podCreationTimestamp="2025-11-25 15:11:53 +0000 UTC" firstStartedPulling="2025-11-25 15:12:11.55773542 +0000 UTC m=+1090.304869850" lastFinishedPulling="2025-11-25 15:12:18.445615863 +0000 UTC m=+1097.192750293" observedRunningTime="2025-11-25 15:12:21.0947205 +0000 UTC m=+1099.841854950" watchObservedRunningTime="2025-11-25 15:12:21.096508935 +0000 UTC m=+1099.843643365" Nov 25 15:12:22 crc kubenswrapper[4731]: I1125 15:12:22.079464 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:12:22 crc kubenswrapper[4731]: I1125 15:12:22.079530 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:12:23 crc kubenswrapper[4731]: I1125 15:12:23.093438 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"600ccfb1-2789-4b20-8012-9b6bdbf836ff","Type":"ContainerDied","Data":"68ff313ede08eea2a15803a35aaee7ea071b95ae1389b566893f61e7c98215cf"} Nov 25 15:12:23 crc kubenswrapper[4731]: I1125 15:12:23.093383 4731 generic.go:334] "Generic (PLEG): container finished" podID="600ccfb1-2789-4b20-8012-9b6bdbf836ff" containerID="68ff313ede08eea2a15803a35aaee7ea071b95ae1389b566893f61e7c98215cf" exitCode=0 Nov 25 15:12:23 crc kubenswrapper[4731]: I1125 15:12:23.095730 4731 generic.go:334] "Generic (PLEG): container finished" podID="6c32b705-7078-46a9-9858-16932a2b2675" containerID="6af0cae137584674e32e0ae0a9dc12fda760ab572a2a05504153ce1803ca4ecc" exitCode=0 Nov 25 15:12:23 crc kubenswrapper[4731]: I1125 15:12:23.095885 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c32b705-7078-46a9-9858-16932a2b2675","Type":"ContainerDied","Data":"6af0cae137584674e32e0ae0a9dc12fda760ab572a2a05504153ce1803ca4ecc"} Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.117575 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" event={"ID":"42edf0e1-7def-4b40-8b9f-16601761c5bf","Type":"ContainerStarted","Data":"00b56dd0635be08180db3060204a4c0ee849da658f95acbef6f56a58db7203a4"} Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.120262 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"1b455e72-bfa2-4a6f-9b54-8424af4e4c53","Type":"ContainerStarted","Data":"20dc2f6619368e346f8c8e5e10d906b18fe24dcfa3cac6bfa2edad2a8ccf5a06"} Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.123205 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"fecb6adf-6993-4a45-a85a-4fa15a484be1","Type":"ContainerStarted","Data":"2d6feac48c2b4420afec7e5f8f941c8aabd36a1488ef6d264b340f85ba7edc09"} Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.124674 4731 generic.go:334] "Generic (PLEG): container finished" podID="ba772372-7abb-4468-985f-769cf2ece70b" containerID="dfb7a653d9b0aed12a37e808f87152651b4cb9e6ffa1260c9ce1d3fcfaf1ff33" exitCode=0 Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.124730 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" event={"ID":"ba772372-7abb-4468-985f-769cf2ece70b","Type":"ContainerDied","Data":"dfb7a653d9b0aed12a37e808f87152651b4cb9e6ffa1260c9ce1d3fcfaf1ff33"} Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.126746 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6c32b705-7078-46a9-9858-16932a2b2675","Type":"ContainerStarted","Data":"cd91dca9a46b412bcf1154221ba613080104f3eff776d8f1e57c7d8c3dbe193c"} Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.128634 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"600ccfb1-2789-4b20-8012-9b6bdbf836ff","Type":"ContainerStarted","Data":"a5d4f7d3d6a6b9161b2ce64b913c52a954cb7afa6c59fac204868465a230b1b5"} Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.162915 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=16.678779455 podStartE2EDuration="27.162896277s" podCreationTimestamp="2025-11-25 15:11:57 +0000 UTC" firstStartedPulling="2025-11-25 15:12:12.485510134 +0000 UTC m=+1091.232644564" lastFinishedPulling="2025-11-25 15:12:22.969626956 +0000 UTC m=+1101.716761386" observedRunningTime="2025-11-25 15:12:24.161959014 +0000 UTC m=+1102.909093444" watchObservedRunningTime="2025-11-25 15:12:24.162896277 +0000 UTC m=+1102.910030717" Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.189226 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=31.777571269 podStartE2EDuration="39.189210317s" podCreationTimestamp="2025-11-25 15:11:45 +0000 UTC" firstStartedPulling="2025-11-25 15:12:11.495493898 +0000 UTC m=+1090.242628328" lastFinishedPulling="2025-11-25 15:12:18.907132956 +0000 UTC m=+1097.654267376" observedRunningTime="2025-11-25 15:12:24.18335886 +0000 UTC m=+1102.930493300" watchObservedRunningTime="2025-11-25 15:12:24.189210317 +0000 UTC m=+1102.936344747" Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.208075 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=30.795845656 podStartE2EDuration="38.208056409s" podCreationTimestamp="2025-11-25 15:11:46 +0000 UTC" firstStartedPulling="2025-11-25 15:12:11.495512138 +0000 UTC m=+1090.242646568" lastFinishedPulling="2025-11-25 15:12:18.907722881 +0000 UTC m=+1097.654857321" observedRunningTime="2025-11-25 15:12:24.201646968 +0000 UTC m=+1102.948781398" watchObservedRunningTime="2025-11-25 15:12:24.208056409 +0000 UTC m=+1102.955190839" Nov 25 15:12:24 crc kubenswrapper[4731]: I1125 15:12:24.228054 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=21.089927602 podStartE2EDuration="30.22803616s" podCreationTimestamp="2025-11-25 15:11:54 +0000 UTC" firstStartedPulling="2025-11-25 15:12:13.932569561 +0000 UTC m=+1092.679703991" lastFinishedPulling="2025-11-25 15:12:23.070678119 +0000 UTC m=+1101.817812549" observedRunningTime="2025-11-25 15:12:24.220313036 +0000 UTC m=+1102.967447466" watchObservedRunningTime="2025-11-25 15:12:24.22803616 +0000 UTC m=+1102.975170590" Nov 25 15:12:25 crc kubenswrapper[4731]: I1125 15:12:25.139005 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" 
event={"ID":"ba772372-7abb-4468-985f-769cf2ece70b","Type":"ContainerStarted","Data":"42e560f7f6a4e0f9ab28f351b9e4673464cdf162134df24280e8bb276d521836"} Nov 25 15:12:25 crc kubenswrapper[4731]: I1125 15:12:25.139553 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:12:25 crc kubenswrapper[4731]: I1125 15:12:25.140518 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9c970e0-c27c-4a82-afd5-97aacf7536f9","Type":"ContainerStarted","Data":"a801c820f5511e4c5db7b4c69c0cacb99eb6775c13be64ca3267bfeeb186738b"} Nov 25 15:12:25 crc kubenswrapper[4731]: I1125 15:12:25.142428 4731 generic.go:334] "Generic (PLEG): container finished" podID="42edf0e1-7def-4b40-8b9f-16601761c5bf" containerID="00b56dd0635be08180db3060204a4c0ee849da658f95acbef6f56a58db7203a4" exitCode=0 Nov 25 15:12:25 crc kubenswrapper[4731]: I1125 15:12:25.142474 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" event={"ID":"42edf0e1-7def-4b40-8b9f-16601761c5bf","Type":"ContainerDied","Data":"00b56dd0635be08180db3060204a4c0ee849da658f95acbef6f56a58db7203a4"} Nov 25 15:12:25 crc kubenswrapper[4731]: I1125 15:12:25.144756 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8","Type":"ContainerStarted","Data":"749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0"} Nov 25 15:12:25 crc kubenswrapper[4731]: I1125 15:12:25.161010 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" podStartSLOduration=3.633723603 podStartE2EDuration="42.160991956s" podCreationTimestamp="2025-11-25 15:11:43 +0000 UTC" firstStartedPulling="2025-11-25 15:11:44.831273922 +0000 UTC m=+1063.578408352" lastFinishedPulling="2025-11-25 15:12:23.358542275 +0000 UTC m=+1102.105676705" observedRunningTime="2025-11-25 15:12:25.155631722 +0000 UTC m=+1103.902766152" watchObservedRunningTime="2025-11-25 15:12:25.160991956 +0000 UTC m=+1103.908126396" Nov 25 15:12:25 crc kubenswrapper[4731]: I1125 15:12:25.382761 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:25 crc kubenswrapper[4731]: I1125 15:12:25.425415 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.102544 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.102634 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.143178 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.157500 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" event={"ID":"42edf0e1-7def-4b40-8b9f-16601761c5bf","Type":"ContainerStarted","Data":"e02fdcbeaba1fb4ae6a3f06519f1d980669d537440993706c27fc96737748eec"} Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.157850 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.191371 4731 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" podStartSLOduration=-9223371993.66343 podStartE2EDuration="43.191347153s" podCreationTimestamp="2025-11-25 15:11:43 +0000 UTC" firstStartedPulling="2025-11-25 15:11:44.214706548 +0000 UTC m=+1062.961840978" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:12:26.184185873 +0000 UTC m=+1104.931320373" watchObservedRunningTime="2025-11-25 15:12:26.191347153 +0000 UTC m=+1104.938481593" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.213794 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.214284 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.459838 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sbwr6"] Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.533416 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mxqb6"] Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.535250 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.537094 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.539275 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-hvl9d"] Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.540213 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.549033 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.574529 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mxqb6"] Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.583005 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hvl9d"] Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.642580 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dmp6l"] Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.673226 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bhjz\" (UniqueName: \"kubernetes.io/projected/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-kube-api-access-4bhjz\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.673305 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-ovn-rundir\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.673336 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.673390 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-config\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.673413 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-ovs-rundir\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.673443 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-combined-ca-bundle\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.673474 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.673508 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-config\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.673536 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.673578 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnjkq\" (UniqueName: \"kubernetes.io/projected/708a4957-fcb7-4568-850e-a523d38100d3-kube-api-access-fnjkq\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.680739 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-92mmw"] Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.682292 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.687352 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.744639 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.774983 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775622 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775682 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-config\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775703 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-ovs-rundir\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775727 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-combined-ca-bundle\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775747 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775766 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fr7dw\" (UniqueName: \"kubernetes.io/projected/e744e850-243e-41a6-a0a7-564ce5a2c4ba-kube-api-access-fr7dw\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775791 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-config\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775826 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775849 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-dns-svc\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " 
pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775874 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnjkq\" (UniqueName: \"kubernetes.io/projected/708a4957-fcb7-4568-850e-a523d38100d3-kube-api-access-fnjkq\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775905 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-config\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775942 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775959 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bhjz\" (UniqueName: \"kubernetes.io/projected/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-kube-api-access-4bhjz\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.775983 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-ovn-rundir\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.776003 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.776883 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.777389 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-config\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.777594 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-ovs-rundir\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: 
I1125 15:12:26.779237 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.780158 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-config\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.780789 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.784291 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-w8jwx" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.785805 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.784352 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.784943 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-ovn-rundir\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.785066 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.803441 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-combined-ca-bundle\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.808743 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bhjz\" (UniqueName: \"kubernetes.io/projected/e1f1fddc-9c04-4e31-b6c4-528d39e050a8-kube-api-access-4bhjz\") pod \"ovn-controller-metrics-hvl9d\" (UID: \"e1f1fddc-9c04-4e31-b6c4-528d39e050a8\") " pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.810683 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnjkq\" (UniqueName: \"kubernetes.io/projected/708a4957-fcb7-4568-850e-a523d38100d3-kube-api-access-fnjkq\") pod \"dnsmasq-dns-5bf47b49b7-mxqb6\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.848902 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.857916 4731 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-92mmw"] Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.884477 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.908611 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-hvl9d" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.908785 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-config\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.908904 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.908944 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6wgb\" (UniqueName: \"kubernetes.io/projected/dc7742e1-0cb7-4119-be98-f544587f3687-kube-api-access-v6wgb\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.909047 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.909096 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7742e1-0cb7-4119-be98-f544587f3687-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.909123 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc7742e1-0cb7-4119-be98-f544587f3687-scripts\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.909183 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fr7dw\" (UniqueName: \"kubernetes.io/projected/e744e850-243e-41a6-a0a7-564ce5a2c4ba-kube-api-access-fr7dw\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.909226 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc7742e1-0cb7-4119-be98-f544587f3687-config\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:26 crc 
kubenswrapper[4731]: I1125 15:12:26.909245 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7742e1-0cb7-4119-be98-f544587f3687-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.909285 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7742e1-0cb7-4119-be98-f544587f3687-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.909335 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-dns-svc\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.909358 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dc7742e1-0cb7-4119-be98-f544587f3687-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.909783 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.910136 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.910539 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-dns-svc\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.911190 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-config\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:26 crc kubenswrapper[4731]: I1125 15:12:26.937375 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fr7dw\" (UniqueName: \"kubernetes.io/projected/e744e850-243e-41a6-a0a7-564ce5a2c4ba-kube-api-access-fr7dw\") pod \"dnsmasq-dns-8554648995-92mmw\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.011269 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" 
(UniqueName: \"kubernetes.io/empty-dir/dc7742e1-0cb7-4119-be98-f544587f3687-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.011774 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6wgb\" (UniqueName: \"kubernetes.io/projected/dc7742e1-0cb7-4119-be98-f544587f3687-kube-api-access-v6wgb\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.011887 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7742e1-0cb7-4119-be98-f544587f3687-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.011910 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc7742e1-0cb7-4119-be98-f544587f3687-scripts\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.011942 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc7742e1-0cb7-4119-be98-f544587f3687-config\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.011960 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7742e1-0cb7-4119-be98-f544587f3687-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.011975 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7742e1-0cb7-4119-be98-f544587f3687-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.011687 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dc7742e1-0cb7-4119-be98-f544587f3687-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.013527 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc7742e1-0cb7-4119-be98-f544587f3687-config\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.014550 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dc7742e1-0cb7-4119-be98-f544587f3687-scripts\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.019523 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/dc7742e1-0cb7-4119-be98-f544587f3687-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.020377 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7742e1-0cb7-4119-be98-f544587f3687-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.021124 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7742e1-0cb7-4119-be98-f544587f3687-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.024249 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.036909 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6wgb\" (UniqueName: \"kubernetes.io/projected/dc7742e1-0cb7-4119-be98-f544587f3687-kube-api-access-v6wgb\") pod \"ovn-northd-0\" (UID: \"dc7742e1-0cb7-4119-be98-f544587f3687\") " pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.163019 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.163327 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" podUID="ba772372-7abb-4468-985f-769cf2ece70b" containerName="dnsmasq-dns" containerID="cri-o://42e560f7f6a4e0f9ab28f351b9e4673464cdf162134df24280e8bb276d521836" gracePeriod=10 Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.163347 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" podUID="42edf0e1-7def-4b40-8b9f-16601761c5bf" containerName="dnsmasq-dns" containerID="cri-o://e02fdcbeaba1fb4ae6a3f06519f1d980669d537440993706c27fc96737748eec" gracePeriod=10 Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.186013 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.201523 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.201564 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.351247 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mxqb6"] Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.460915 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-hvl9d"] Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.542414 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-92mmw"] Nov 25 15:12:27 crc kubenswrapper[4731]: W1125 15:12:27.585412 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode744e850_243e_41a6_a0a7_564ce5a2c4ba.slice/crio-4fa4dc23b3185fcaebe625739e40c20d544914689f0dca4f535b781ab440dd4b WatchSource:0}: Error finding container 4fa4dc23b3185fcaebe625739e40c20d544914689f0dca4f535b781ab440dd4b: Status 404 returned error can't find the container with id 4fa4dc23b3185fcaebe625739e40c20d544914689f0dca4f535b781ab440dd4b Nov 25 15:12:27 crc kubenswrapper[4731]: I1125 15:12:27.705323 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Nov 25 15:12:27 crc kubenswrapper[4731]: W1125 15:12:27.710083 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc7742e1_0cb7_4119_be98_f544587f3687.slice/crio-b99ef138e1949e03297b432bcb96cce1bc216d09dbcf556de9208b5eff4068b1 WatchSource:0}: Error finding container b99ef138e1949e03297b432bcb96cce1bc216d09dbcf556de9208b5eff4068b1: Status 404 returned error can't find the container with id b99ef138e1949e03297b432bcb96cce1bc216d09dbcf556de9208b5eff4068b1 Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.172647 4731 generic.go:334] "Generic (PLEG): container finished" podID="ba772372-7abb-4468-985f-769cf2ece70b" containerID="42e560f7f6a4e0f9ab28f351b9e4673464cdf162134df24280e8bb276d521836" exitCode=0 Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.172733 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" event={"ID":"ba772372-7abb-4468-985f-769cf2ece70b","Type":"ContainerDied","Data":"42e560f7f6a4e0f9ab28f351b9e4673464cdf162134df24280e8bb276d521836"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.173080 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" event={"ID":"ba772372-7abb-4468-985f-769cf2ece70b","Type":"ContainerDied","Data":"8bab5deca7373f46f3ba6502728ed7368e34d2b25adb7f4f6513be238edc7c7c"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.173102 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bab5deca7373f46f3ba6502728ed7368e34d2b25adb7f4f6513be238edc7c7c" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.174884 4731 generic.go:334] "Generic (PLEG): container finished" podID="708a4957-fcb7-4568-850e-a523d38100d3" containerID="da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05" exitCode=0 Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.174948 4731 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" event={"ID":"708a4957-fcb7-4568-850e-a523d38100d3","Type":"ContainerDied","Data":"da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.174976 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" event={"ID":"708a4957-fcb7-4568-850e-a523d38100d3","Type":"ContainerStarted","Data":"872599a18836e8011a5a07750180a4418b1dc24d91955d989fef8981d56a3095"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.177587 4731 generic.go:334] "Generic (PLEG): container finished" podID="42edf0e1-7def-4b40-8b9f-16601761c5bf" containerID="e02fdcbeaba1fb4ae6a3f06519f1d980669d537440993706c27fc96737748eec" exitCode=0 Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.177608 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" event={"ID":"42edf0e1-7def-4b40-8b9f-16601761c5bf","Type":"ContainerDied","Data":"e02fdcbeaba1fb4ae6a3f06519f1d980669d537440993706c27fc96737748eec"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.177649 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" event={"ID":"42edf0e1-7def-4b40-8b9f-16601761c5bf","Type":"ContainerDied","Data":"6dd39291d4372f1ded8e61bf131394f9fc1bc51b8d4dfc409b6385ad8e4d7700"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.177664 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dd39291d4372f1ded8e61bf131394f9fc1bc51b8d4dfc409b6385ad8e4d7700" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.179082 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"dc7742e1-0cb7-4119-be98-f544587f3687","Type":"ContainerStarted","Data":"b99ef138e1949e03297b432bcb96cce1bc216d09dbcf556de9208b5eff4068b1"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.183165 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hvl9d" event={"ID":"e1f1fddc-9c04-4e31-b6c4-528d39e050a8","Type":"ContainerStarted","Data":"659c53160273c81602a684957e1f414d0a2b4da9d04b800f4afcf556d8c4a763"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.183209 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-hvl9d" event={"ID":"e1f1fddc-9c04-4e31-b6c4-528d39e050a8","Type":"ContainerStarted","Data":"07dba1275c5781d067829a8bc9cf9ad6c3bc81a014b98ff37de7374c72c9a63d"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.199506 4731 generic.go:334] "Generic (PLEG): container finished" podID="e744e850-243e-41a6-a0a7-564ce5a2c4ba" containerID="ed15a7fdf87593dced3dde7d60cfe24fc4eb81538e7f21e59b6581a2718cafac" exitCode=0 Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.199790 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-92mmw" event={"ID":"e744e850-243e-41a6-a0a7-564ce5a2c4ba","Type":"ContainerDied","Data":"ed15a7fdf87593dced3dde7d60cfe24fc4eb81538e7f21e59b6581a2718cafac"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.199840 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-92mmw" event={"ID":"e744e850-243e-41a6-a0a7-564ce5a2c4ba","Type":"ContainerStarted","Data":"4fa4dc23b3185fcaebe625739e40c20d544914689f0dca4f535b781ab440dd4b"} Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.235108 4731 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-hvl9d" podStartSLOduration=2.235086061 podStartE2EDuration="2.235086061s" podCreationTimestamp="2025-11-25 15:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:12:28.23185933 +0000 UTC m=+1106.978993770" watchObservedRunningTime="2025-11-25 15:12:28.235086061 +0000 UTC m=+1106.982220491" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.294912 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.309668 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.315451 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.338695 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.339138 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-config\") pod \"42edf0e1-7def-4b40-8b9f-16601761c5bf\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.339327 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhjtg\" (UniqueName: \"kubernetes.io/projected/42edf0e1-7def-4b40-8b9f-16601761c5bf-kube-api-access-xhjtg\") pod \"42edf0e1-7def-4b40-8b9f-16601761c5bf\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.339538 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-dns-svc\") pod \"42edf0e1-7def-4b40-8b9f-16601761c5bf\" (UID: \"42edf0e1-7def-4b40-8b9f-16601761c5bf\") " Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.362091 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42edf0e1-7def-4b40-8b9f-16601761c5bf-kube-api-access-xhjtg" (OuterVolumeSpecName: "kube-api-access-xhjtg") pod "42edf0e1-7def-4b40-8b9f-16601761c5bf" (UID: "42edf0e1-7def-4b40-8b9f-16601761c5bf"). InnerVolumeSpecName "kube-api-access-xhjtg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.441389 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdblr\" (UniqueName: \"kubernetes.io/projected/ba772372-7abb-4468-985f-769cf2ece70b-kube-api-access-pdblr\") pod \"ba772372-7abb-4468-985f-769cf2ece70b\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.441428 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-config\") pod \"ba772372-7abb-4468-985f-769cf2ece70b\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.441479 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-dns-svc\") pod \"ba772372-7abb-4468-985f-769cf2ece70b\" (UID: \"ba772372-7abb-4468-985f-769cf2ece70b\") " Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.441838 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhjtg\" (UniqueName: \"kubernetes.io/projected/42edf0e1-7def-4b40-8b9f-16601761c5bf-kube-api-access-xhjtg\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.443114 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-config" (OuterVolumeSpecName: "config") pod "42edf0e1-7def-4b40-8b9f-16601761c5bf" (UID: "42edf0e1-7def-4b40-8b9f-16601761c5bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.448905 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba772372-7abb-4468-985f-769cf2ece70b-kube-api-access-pdblr" (OuterVolumeSpecName: "kube-api-access-pdblr") pod "ba772372-7abb-4468-985f-769cf2ece70b" (UID: "ba772372-7abb-4468-985f-769cf2ece70b"). InnerVolumeSpecName "kube-api-access-pdblr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.475712 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "42edf0e1-7def-4b40-8b9f-16601761c5bf" (UID: "42edf0e1-7def-4b40-8b9f-16601761c5bf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.492238 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-config" (OuterVolumeSpecName: "config") pod "ba772372-7abb-4468-985f-769cf2ece70b" (UID: "ba772372-7abb-4468-985f-769cf2ece70b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.508423 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ba772372-7abb-4468-985f-769cf2ece70b" (UID: "ba772372-7abb-4468-985f-769cf2ece70b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.542810 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdblr\" (UniqueName: \"kubernetes.io/projected/ba772372-7abb-4468-985f-769cf2ece70b-kube-api-access-pdblr\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.542885 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.542896 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.542906 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ba772372-7abb-4468-985f-769cf2ece70b-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.542914 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/42edf0e1-7def-4b40-8b9f-16601761c5bf-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:28 crc kubenswrapper[4731]: I1125 15:12:28.683570 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Nov 25 15:12:28 crc kubenswrapper[4731]: E1125 15:12:28.997487 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42edf0e1_7def_4b40_8b9f_16601761c5bf.slice/crio-6dd39291d4372f1ded8e61bf131394f9fc1bc51b8d4dfc409b6385ad8e4d7700\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod42edf0e1_7def_4b40_8b9f_16601761c5bf.slice\": RecentStats: unable to find data in memory cache]" Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.217786 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-92mmw" event={"ID":"e744e850-243e-41a6-a0a7-564ce5a2c4ba","Type":"ContainerStarted","Data":"74ef5049b547c2aff5288789bb966aa69b75cc19cbd4b5604e632069a1e9d754"} Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.225065 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-dmp6l" Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.225529 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-sbwr6" Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.226103 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" event={"ID":"708a4957-fcb7-4568-850e-a523d38100d3","Type":"ContainerStarted","Data":"f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1"} Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.243287 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-92mmw" podStartSLOduration=3.243264503 podStartE2EDuration="3.243264503s" podCreationTimestamp="2025-11-25 15:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:12:29.239492188 +0000 UTC m=+1107.986626618" watchObservedRunningTime="2025-11-25 15:12:29.243264503 +0000 UTC m=+1107.990398943" Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.264711 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" podStartSLOduration=3.2646946 podStartE2EDuration="3.2646946s" podCreationTimestamp="2025-11-25 15:12:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:12:29.263857699 +0000 UTC m=+1108.010992139" watchObservedRunningTime="2025-11-25 15:12:29.2646946 +0000 UTC m=+1108.011829030" Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.291033 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dmp6l"] Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.301085 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-dmp6l"] Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.310776 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sbwr6"] Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.316599 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-sbwr6"] Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.457353 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Nov 25 15:12:29 crc kubenswrapper[4731]: I1125 15:12:29.538773 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.234569 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"dc7742e1-0cb7-4119-be98-f544587f3687","Type":"ContainerStarted","Data":"5f634f4e9e5331608760fc79552f2323f7cf26c712f5b43050d4f22457531da2"} Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.235389 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.235412 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"dc7742e1-0cb7-4119-be98-f544587f3687","Type":"ContainerStarted","Data":"cc6ef1ff2cc31372341da961bb7b2ef8fc2d057cbd2c6a2e7c605b18e1ad68bf"} Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.235436 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.235453 4731 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.260382 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.891662289 podStartE2EDuration="4.260356577s" podCreationTimestamp="2025-11-25 15:12:26 +0000 UTC" firstStartedPulling="2025-11-25 15:12:27.71960652 +0000 UTC m=+1106.466740960" lastFinishedPulling="2025-11-25 15:12:29.088300808 +0000 UTC m=+1107.835435248" observedRunningTime="2025-11-25 15:12:30.253437623 +0000 UTC m=+1109.000572053" watchObservedRunningTime="2025-11-25 15:12:30.260356577 +0000 UTC m=+1109.007491037" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.718196 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.824755 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42edf0e1-7def-4b40-8b9f-16601761c5bf" path="/var/lib/kubelet/pods/42edf0e1-7def-4b40-8b9f-16601761c5bf/volumes" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.826112 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba772372-7abb-4468-985f-769cf2ece70b" path="/var/lib/kubelet/pods/ba772372-7abb-4468-985f-769cf2ece70b/volumes" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.889665 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mxqb6"] Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.934357 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2swq5"] Nov 25 15:12:30 crc kubenswrapper[4731]: E1125 15:12:30.934679 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42edf0e1-7def-4b40-8b9f-16601761c5bf" containerName="init" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.934690 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="42edf0e1-7def-4b40-8b9f-16601761c5bf" containerName="init" Nov 25 15:12:30 crc kubenswrapper[4731]: E1125 15:12:30.934715 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba772372-7abb-4468-985f-769cf2ece70b" containerName="init" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.934721 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba772372-7abb-4468-985f-769cf2ece70b" containerName="init" Nov 25 15:12:30 crc kubenswrapper[4731]: E1125 15:12:30.934736 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42edf0e1-7def-4b40-8b9f-16601761c5bf" containerName="dnsmasq-dns" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.934742 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="42edf0e1-7def-4b40-8b9f-16601761c5bf" containerName="dnsmasq-dns" Nov 25 15:12:30 crc kubenswrapper[4731]: E1125 15:12:30.934755 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba772372-7abb-4468-985f-769cf2ece70b" containerName="dnsmasq-dns" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.934761 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba772372-7abb-4468-985f-769cf2ece70b" containerName="dnsmasq-dns" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.934935 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba772372-7abb-4468-985f-769cf2ece70b" containerName="dnsmasq-dns" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.934947 4731 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="42edf0e1-7def-4b40-8b9f-16601761c5bf" containerName="dnsmasq-dns" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.935702 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.958510 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2swq5"] Nov 25 15:12:30 crc kubenswrapper[4731]: I1125 15:12:30.969971 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.075910 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.084217 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.084344 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c96m\" (UniqueName: \"kubernetes.io/projected/e032f375-15e6-4cbb-adaf-ec7187abecba-kube-api-access-6c96m\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.084403 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.084421 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-config\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.084443 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.186014 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.186065 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-config\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc 
kubenswrapper[4731]: I1125 15:12:31.186102 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.186174 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.186289 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c96m\" (UniqueName: \"kubernetes.io/projected/e032f375-15e6-4cbb-adaf-ec7187abecba-kube-api-access-6c96m\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.187466 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-config\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.187508 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.187508 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.187741 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.216672 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c96m\" (UniqueName: \"kubernetes.io/projected/e032f375-15e6-4cbb-adaf-ec7187abecba-kube-api-access-6c96m\") pod \"dnsmasq-dns-b8fbc5445-2swq5\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.254351 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:31 crc kubenswrapper[4731]: I1125 15:12:31.677456 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2swq5"] Nov 25 15:12:31 crc kubenswrapper[4731]: W1125 15:12:31.678861 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode032f375_15e6_4cbb_adaf_ec7187abecba.slice/crio-c524aa9fc6ab18a8e62c39cb7dd84ab62d2eac179fdf0496a3dbaa8e10eb4d66 WatchSource:0}: Error finding container c524aa9fc6ab18a8e62c39cb7dd84ab62d2eac179fdf0496a3dbaa8e10eb4d66: Status 404 returned error can't find the container with id c524aa9fc6ab18a8e62c39cb7dd84ab62d2eac179fdf0496a3dbaa8e10eb4d66 Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.032878 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.038252 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.048800 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.048799 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.048885 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.048929 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-ntm5n" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.065723 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.201601 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7e07b944-1135-4cda-ae5f-6c387117d5bd-cache\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.201637 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7e07b944-1135-4cda-ae5f-6c387117d5bd-lock\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.201682 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m9vt\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-kube-api-access-4m9vt\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.201721 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.201806 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.251189 4731 generic.go:334] "Generic (PLEG): container finished" podID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerID="dad39291fcc899ee19ab7172511e80c997e9d62c967a1e381a6c890767c9fcf8" exitCode=0 Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.251296 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" event={"ID":"e032f375-15e6-4cbb-adaf-ec7187abecba","Type":"ContainerDied","Data":"dad39291fcc899ee19ab7172511e80c997e9d62c967a1e381a6c890767c9fcf8"} Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.251490 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" event={"ID":"e032f375-15e6-4cbb-adaf-ec7187abecba","Type":"ContainerStarted","Data":"c524aa9fc6ab18a8e62c39cb7dd84ab62d2eac179fdf0496a3dbaa8e10eb4d66"} Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.251677 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" podUID="708a4957-fcb7-4568-850e-a523d38100d3" containerName="dnsmasq-dns" containerID="cri-o://f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1" gracePeriod=10 Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.304006 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.305619 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7e07b944-1135-4cda-ae5f-6c387117d5bd-cache\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: E1125 15:12:32.304842 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 15:12:32 crc kubenswrapper[4731]: E1125 15:12:32.305780 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 15:12:32 crc kubenswrapper[4731]: E1125 15:12:32.305879 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift podName:7e07b944-1135-4cda-ae5f-6c387117d5bd nodeName:}" failed. No retries permitted until 2025-11-25 15:12:32.80585023 +0000 UTC m=+1111.552984670 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift") pod "swift-storage-0" (UID: "7e07b944-1135-4cda-ae5f-6c387117d5bd") : configmap "swift-ring-files" not found Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.305745 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7e07b944-1135-4cda-ae5f-6c387117d5bd-lock\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.305975 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m9vt\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-kube-api-access-4m9vt\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.306032 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.306225 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/7e07b944-1135-4cda-ae5f-6c387117d5bd-cache\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.307072 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/7e07b944-1135-4cda-ae5f-6c387117d5bd-lock\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.345852 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.355578 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m9vt\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-kube-api-access-4m9vt\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.372570 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.531822 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-br7gv"] Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.532850 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.536033 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.536197 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.536301 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.542242 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-br7gv"] Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.611982 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-combined-ca-bundle\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.612052 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-etc-swift\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.612074 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-dispersionconf\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.612094 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjchb\" (UniqueName: \"kubernetes.io/projected/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-kube-api-access-qjchb\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.612118 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-swiftconf\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.612174 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-scripts\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.612205 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-ring-data-devices\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 
15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.613871 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.713700 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fnjkq\" (UniqueName: \"kubernetes.io/projected/708a4957-fcb7-4568-850e-a523d38100d3-kube-api-access-fnjkq\") pod \"708a4957-fcb7-4568-850e-a523d38100d3\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.714112 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-dns-svc\") pod \"708a4957-fcb7-4568-850e-a523d38100d3\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.714174 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-ovsdbserver-nb\") pod \"708a4957-fcb7-4568-850e-a523d38100d3\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.714239 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-config\") pod \"708a4957-fcb7-4568-850e-a523d38100d3\" (UID: \"708a4957-fcb7-4568-850e-a523d38100d3\") " Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.714525 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjchb\" (UniqueName: \"kubernetes.io/projected/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-kube-api-access-qjchb\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.714570 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-swiftconf\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.714645 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-scripts\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.714697 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-ring-data-devices\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.714732 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-combined-ca-bundle\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 
15:12:32.714780 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-etc-swift\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.714805 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-dispersionconf\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.715766 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-etc-swift\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.716268 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-ring-data-devices\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.717002 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-scripts\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.719675 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-swiftconf\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.719858 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-dispersionconf\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.720744 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-combined-ca-bundle\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.727547 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/708a4957-fcb7-4568-850e-a523d38100d3-kube-api-access-fnjkq" (OuterVolumeSpecName: "kube-api-access-fnjkq") pod "708a4957-fcb7-4568-850e-a523d38100d3" (UID: "708a4957-fcb7-4568-850e-a523d38100d3"). InnerVolumeSpecName "kube-api-access-fnjkq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.730908 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjchb\" (UniqueName: \"kubernetes.io/projected/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-kube-api-access-qjchb\") pod \"swift-ring-rebalance-br7gv\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.754197 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "708a4957-fcb7-4568-850e-a523d38100d3" (UID: "708a4957-fcb7-4568-850e-a523d38100d3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.758452 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-config" (OuterVolumeSpecName: "config") pod "708a4957-fcb7-4568-850e-a523d38100d3" (UID: "708a4957-fcb7-4568-850e-a523d38100d3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.761484 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "708a4957-fcb7-4568-850e-a523d38100d3" (UID: "708a4957-fcb7-4568-850e-a523d38100d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.815871 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.816006 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.816017 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fnjkq\" (UniqueName: \"kubernetes.io/projected/708a4957-fcb7-4568-850e-a523d38100d3-kube-api-access-fnjkq\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.816027 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.816037 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/708a4957-fcb7-4568-850e-a523d38100d3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:32 crc kubenswrapper[4731]: E1125 15:12:32.816046 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 15:12:32 crc kubenswrapper[4731]: E1125 15:12:32.816104 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 15:12:32 crc 
kubenswrapper[4731]: E1125 15:12:32.816184 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift podName:7e07b944-1135-4cda-ae5f-6c387117d5bd nodeName:}" failed. No retries permitted until 2025-11-25 15:12:33.816164922 +0000 UTC m=+1112.563299352 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift") pod "swift-storage-0" (UID: "7e07b944-1135-4cda-ae5f-6c387117d5bd") : configmap "swift-ring-files" not found Nov 25 15:12:32 crc kubenswrapper[4731]: I1125 15:12:32.858059 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.262112 4731 generic.go:334] "Generic (PLEG): container finished" podID="708a4957-fcb7-4568-850e-a523d38100d3" containerID="f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1" exitCode=0 Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.262181 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" event={"ID":"708a4957-fcb7-4568-850e-a523d38100d3","Type":"ContainerDied","Data":"f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1"} Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.262213 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" event={"ID":"708a4957-fcb7-4568-850e-a523d38100d3","Type":"ContainerDied","Data":"872599a18836e8011a5a07750180a4418b1dc24d91955d989fef8981d56a3095"} Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.262234 4731 scope.go:117] "RemoveContainer" containerID="f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1" Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.262365 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-mxqb6" Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.269894 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" event={"ID":"e032f375-15e6-4cbb-adaf-ec7187abecba","Type":"ContainerStarted","Data":"5e7598b1563f22146a5642a2b91b284e0289259a66cd71559737621209651d0c"} Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.271070 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.296005 4731 scope.go:117] "RemoveContainer" containerID="da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05" Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.298009 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-br7gv"] Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.306374 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" podStartSLOduration=3.306348499 podStartE2EDuration="3.306348499s" podCreationTimestamp="2025-11-25 15:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:12:33.290866371 +0000 UTC m=+1112.038000801" watchObservedRunningTime="2025-11-25 15:12:33.306348499 +0000 UTC m=+1112.053482929" Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.321665 4731 scope.go:117] "RemoveContainer" containerID="f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1" Nov 25 15:12:33 crc kubenswrapper[4731]: E1125 15:12:33.322932 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1\": container with ID starting with f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1 not found: ID does not exist" containerID="f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1" Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.322998 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1"} err="failed to get container status \"f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1\": rpc error: code = NotFound desc = could not find container \"f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1\": container with ID starting with f13b07ecaa114ef35b35094d2e5b61e170401d09db7984d5e3005fee0487b7e1 not found: ID does not exist" Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.323035 4731 scope.go:117] "RemoveContainer" containerID="da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05" Nov 25 15:12:33 crc kubenswrapper[4731]: E1125 15:12:33.323764 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05\": container with ID starting with da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05 not found: ID does not exist" containerID="da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05" Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.323804 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05"} err="failed to get container status \"da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05\": rpc error: code = NotFound desc = could not find container \"da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05\": container with ID starting with da4cbe069060eaddd906aad69f40a851f39f3f8c23b53706f8d86ffcbe5bee05 not found: ID does not exist" Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.325228 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mxqb6"] Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.334628 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-mxqb6"] Nov 25 15:12:33 crc kubenswrapper[4731]: I1125 15:12:33.832030 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:33 crc kubenswrapper[4731]: E1125 15:12:33.832263 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 15:12:33 crc kubenswrapper[4731]: E1125 15:12:33.832289 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 15:12:33 crc kubenswrapper[4731]: E1125 15:12:33.832345 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift podName:7e07b944-1135-4cda-ae5f-6c387117d5bd nodeName:}" failed. No retries permitted until 2025-11-25 15:12:35.832327332 +0000 UTC m=+1114.579461762 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift") pod "swift-storage-0" (UID: "7e07b944-1135-4cda-ae5f-6c387117d5bd") : configmap "swift-ring-files" not found Nov 25 15:12:34 crc kubenswrapper[4731]: I1125 15:12:34.279186 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-br7gv" event={"ID":"6528b54d-5bbd-4f24-9041-e1dd5fef70c7","Type":"ContainerStarted","Data":"1db8cffd72939ef08cb726b3ee0134dc31ae71e3ee7591bf92a683ccbef97fe3"} Nov 25 15:12:34 crc kubenswrapper[4731]: I1125 15:12:34.827997 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="708a4957-fcb7-4568-850e-a523d38100d3" path="/var/lib/kubelet/pods/708a4957-fcb7-4568-850e-a523d38100d3/volumes" Nov 25 15:12:35 crc kubenswrapper[4731]: I1125 15:12:35.867991 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:35 crc kubenswrapper[4731]: E1125 15:12:35.868244 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 15:12:35 crc kubenswrapper[4731]: E1125 15:12:35.868277 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 15:12:35 crc kubenswrapper[4731]: E1125 15:12:35.868339 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift podName:7e07b944-1135-4cda-ae5f-6c387117d5bd nodeName:}" failed. No retries permitted until 2025-11-25 15:12:39.868322747 +0000 UTC m=+1118.615457177 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift") pod "swift-storage-0" (UID: "7e07b944-1135-4cda-ae5f-6c387117d5bd") : configmap "swift-ring-files" not found Nov 25 15:12:37 crc kubenswrapper[4731]: I1125 15:12:37.026049 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.398040 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-n2zrw"] Nov 25 15:12:38 crc kubenswrapper[4731]: E1125 15:12:38.398678 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708a4957-fcb7-4568-850e-a523d38100d3" containerName="dnsmasq-dns" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.398691 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="708a4957-fcb7-4568-850e-a523d38100d3" containerName="dnsmasq-dns" Nov 25 15:12:38 crc kubenswrapper[4731]: E1125 15:12:38.398711 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="708a4957-fcb7-4568-850e-a523d38100d3" containerName="init" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.398719 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="708a4957-fcb7-4568-850e-a523d38100d3" containerName="init" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.398901 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="708a4957-fcb7-4568-850e-a523d38100d3" containerName="dnsmasq-dns" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.407338 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n2zrw" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.427371 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-n2zrw"] Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.437727 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7e51-account-create-2kjwl"] Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.439041 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7e51-account-create-2kjwl" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.443419 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.460138 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7e51-account-create-2kjwl"] Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.512931 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4rt9\" (UniqueName: \"kubernetes.io/projected/e05a140c-c361-4e2a-9d20-822743eeceba-kube-api-access-k4rt9\") pod \"keystone-7e51-account-create-2kjwl\" (UID: \"e05a140c-c361-4e2a-9d20-822743eeceba\") " pod="openstack/keystone-7e51-account-create-2kjwl" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.513060 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05a140c-c361-4e2a-9d20-822743eeceba-operator-scripts\") pod \"keystone-7e51-account-create-2kjwl\" (UID: \"e05a140c-c361-4e2a-9d20-822743eeceba\") " pod="openstack/keystone-7e51-account-create-2kjwl" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.513257 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khdnj\" (UniqueName: \"kubernetes.io/projected/4092bd55-8a71-4286-a6e3-eca18e3ced0b-kube-api-access-khdnj\") pod \"keystone-db-create-n2zrw\" (UID: \"4092bd55-8a71-4286-a6e3-eca18e3ced0b\") " pod="openstack/keystone-db-create-n2zrw" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.513340 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4092bd55-8a71-4286-a6e3-eca18e3ced0b-operator-scripts\") pod \"keystone-db-create-n2zrw\" (UID: \"4092bd55-8a71-4286-a6e3-eca18e3ced0b\") " pod="openstack/keystone-db-create-n2zrw" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.602785 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-q8jfj"] Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.604071 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-q8jfj" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.615073 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4092bd55-8a71-4286-a6e3-eca18e3ced0b-operator-scripts\") pod \"keystone-db-create-n2zrw\" (UID: \"4092bd55-8a71-4286-a6e3-eca18e3ced0b\") " pod="openstack/keystone-db-create-n2zrw" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.615192 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4rt9\" (UniqueName: \"kubernetes.io/projected/e05a140c-c361-4e2a-9d20-822743eeceba-kube-api-access-k4rt9\") pod \"keystone-7e51-account-create-2kjwl\" (UID: \"e05a140c-c361-4e2a-9d20-822743eeceba\") " pod="openstack/keystone-7e51-account-create-2kjwl" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.615238 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05a140c-c361-4e2a-9d20-822743eeceba-operator-scripts\") pod \"keystone-7e51-account-create-2kjwl\" (UID: \"e05a140c-c361-4e2a-9d20-822743eeceba\") " pod="openstack/keystone-7e51-account-create-2kjwl" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.615318 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khdnj\" (UniqueName: \"kubernetes.io/projected/4092bd55-8a71-4286-a6e3-eca18e3ced0b-kube-api-access-khdnj\") pod \"keystone-db-create-n2zrw\" (UID: \"4092bd55-8a71-4286-a6e3-eca18e3ced0b\") " pod="openstack/keystone-db-create-n2zrw" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.616125 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4092bd55-8a71-4286-a6e3-eca18e3ced0b-operator-scripts\") pod \"keystone-db-create-n2zrw\" (UID: \"4092bd55-8a71-4286-a6e3-eca18e3ced0b\") " pod="openstack/keystone-db-create-n2zrw" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.616347 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05a140c-c361-4e2a-9d20-822743eeceba-operator-scripts\") pod \"keystone-7e51-account-create-2kjwl\" (UID: \"e05a140c-c361-4e2a-9d20-822743eeceba\") " pod="openstack/keystone-7e51-account-create-2kjwl" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.628876 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-q8jfj"] Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.640459 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-18b0-account-create-bnvc5"] Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.641992 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-18b0-account-create-bnvc5" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.643389 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khdnj\" (UniqueName: \"kubernetes.io/projected/4092bd55-8a71-4286-a6e3-eca18e3ced0b-kube-api-access-khdnj\") pod \"keystone-db-create-n2zrw\" (UID: \"4092bd55-8a71-4286-a6e3-eca18e3ced0b\") " pod="openstack/keystone-db-create-n2zrw" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.645711 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.653559 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4rt9\" (UniqueName: \"kubernetes.io/projected/e05a140c-c361-4e2a-9d20-822743eeceba-kube-api-access-k4rt9\") pod \"keystone-7e51-account-create-2kjwl\" (UID: \"e05a140c-c361-4e2a-9d20-822743eeceba\") " pod="openstack/keystone-7e51-account-create-2kjwl" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.664075 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-18b0-account-create-bnvc5"] Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.716476 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmjxf\" (UniqueName: \"kubernetes.io/projected/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-kube-api-access-nmjxf\") pod \"placement-18b0-account-create-bnvc5\" (UID: \"42f1fb22-05ce-49c4-b0fb-fd0c98a81216\") " pod="openstack/placement-18b0-account-create-bnvc5" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.716922 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be16145-50e5-4de7-99c1-ad67f473f16f-operator-scripts\") pod \"placement-db-create-q8jfj\" (UID: \"7be16145-50e5-4de7-99c1-ad67f473f16f\") " pod="openstack/placement-db-create-q8jfj" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.716994 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-operator-scripts\") pod \"placement-18b0-account-create-bnvc5\" (UID: \"42f1fb22-05ce-49c4-b0fb-fd0c98a81216\") " pod="openstack/placement-18b0-account-create-bnvc5" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.717045 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5tlz\" (UniqueName: \"kubernetes.io/projected/7be16145-50e5-4de7-99c1-ad67f473f16f-kube-api-access-p5tlz\") pod \"placement-db-create-q8jfj\" (UID: \"7be16145-50e5-4de7-99c1-ad67f473f16f\") " pod="openstack/placement-db-create-q8jfj" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.753781 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n2zrw" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.764359 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7e51-account-create-2kjwl" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.818437 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5tlz\" (UniqueName: \"kubernetes.io/projected/7be16145-50e5-4de7-99c1-ad67f473f16f-kube-api-access-p5tlz\") pod \"placement-db-create-q8jfj\" (UID: \"7be16145-50e5-4de7-99c1-ad67f473f16f\") " pod="openstack/placement-db-create-q8jfj" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.818498 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmjxf\" (UniqueName: \"kubernetes.io/projected/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-kube-api-access-nmjxf\") pod \"placement-18b0-account-create-bnvc5\" (UID: \"42f1fb22-05ce-49c4-b0fb-fd0c98a81216\") " pod="openstack/placement-18b0-account-create-bnvc5" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.818595 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be16145-50e5-4de7-99c1-ad67f473f16f-operator-scripts\") pod \"placement-db-create-q8jfj\" (UID: \"7be16145-50e5-4de7-99c1-ad67f473f16f\") " pod="openstack/placement-db-create-q8jfj" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.818680 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-operator-scripts\") pod \"placement-18b0-account-create-bnvc5\" (UID: \"42f1fb22-05ce-49c4-b0fb-fd0c98a81216\") " pod="openstack/placement-18b0-account-create-bnvc5" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.819948 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-operator-scripts\") pod \"placement-18b0-account-create-bnvc5\" (UID: \"42f1fb22-05ce-49c4-b0fb-fd0c98a81216\") " pod="openstack/placement-18b0-account-create-bnvc5" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.823012 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be16145-50e5-4de7-99c1-ad67f473f16f-operator-scripts\") pod \"placement-db-create-q8jfj\" (UID: \"7be16145-50e5-4de7-99c1-ad67f473f16f\") " pod="openstack/placement-db-create-q8jfj" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.870226 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jfhp7"] Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.879602 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmjxf\" (UniqueName: \"kubernetes.io/projected/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-kube-api-access-nmjxf\") pod \"placement-18b0-account-create-bnvc5\" (UID: \"42f1fb22-05ce-49c4-b0fb-fd0c98a81216\") " pod="openstack/placement-18b0-account-create-bnvc5" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.883515 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5tlz\" (UniqueName: \"kubernetes.io/projected/7be16145-50e5-4de7-99c1-ad67f473f16f-kube-api-access-p5tlz\") pod \"placement-db-create-q8jfj\" (UID: \"7be16145-50e5-4de7-99c1-ad67f473f16f\") " pod="openstack/placement-db-create-q8jfj" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.885251 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-db-create-jfhp7"] Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.885372 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jfhp7" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.923793 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q8jfj" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.962285 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9e93-account-create-59bwv"] Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.963318 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9e93-account-create-59bwv" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.966014 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Nov 25 15:12:38 crc kubenswrapper[4731]: I1125 15:12:38.971609 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9e93-account-create-59bwv"] Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.010851 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-18b0-account-create-bnvc5" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.022553 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-764s2\" (UniqueName: \"kubernetes.io/projected/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-kube-api-access-764s2\") pod \"glance-db-create-jfhp7\" (UID: \"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a\") " pod="openstack/glance-db-create-jfhp7" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.022637 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-operator-scripts\") pod \"glance-db-create-jfhp7\" (UID: \"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a\") " pod="openstack/glance-db-create-jfhp7" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.123639 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369556c5-9264-481b-96e6-070055cab1ed-operator-scripts\") pod \"glance-9e93-account-create-59bwv\" (UID: \"369556c5-9264-481b-96e6-070055cab1ed\") " pod="openstack/glance-9e93-account-create-59bwv" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.123705 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlkwg\" (UniqueName: \"kubernetes.io/projected/369556c5-9264-481b-96e6-070055cab1ed-kube-api-access-vlkwg\") pod \"glance-9e93-account-create-59bwv\" (UID: \"369556c5-9264-481b-96e6-070055cab1ed\") " pod="openstack/glance-9e93-account-create-59bwv" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.123858 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-764s2\" (UniqueName: \"kubernetes.io/projected/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-kube-api-access-764s2\") pod \"glance-db-create-jfhp7\" (UID: \"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a\") " pod="openstack/glance-db-create-jfhp7" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.123950 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-operator-scripts\") pod \"glance-db-create-jfhp7\" (UID: \"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a\") " pod="openstack/glance-db-create-jfhp7" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.124620 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-operator-scripts\") pod \"glance-db-create-jfhp7\" (UID: \"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a\") " pod="openstack/glance-db-create-jfhp7" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.141926 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-764s2\" (UniqueName: \"kubernetes.io/projected/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-kube-api-access-764s2\") pod \"glance-db-create-jfhp7\" (UID: \"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a\") " pod="openstack/glance-db-create-jfhp7" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.225520 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369556c5-9264-481b-96e6-070055cab1ed-operator-scripts\") pod \"glance-9e93-account-create-59bwv\" (UID: \"369556c5-9264-481b-96e6-070055cab1ed\") " pod="openstack/glance-9e93-account-create-59bwv" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.225640 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlkwg\" (UniqueName: \"kubernetes.io/projected/369556c5-9264-481b-96e6-070055cab1ed-kube-api-access-vlkwg\") pod \"glance-9e93-account-create-59bwv\" (UID: \"369556c5-9264-481b-96e6-070055cab1ed\") " pod="openstack/glance-9e93-account-create-59bwv" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.227042 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369556c5-9264-481b-96e6-070055cab1ed-operator-scripts\") pod \"glance-9e93-account-create-59bwv\" (UID: \"369556c5-9264-481b-96e6-070055cab1ed\") " pod="openstack/glance-9e93-account-create-59bwv" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.244003 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlkwg\" (UniqueName: \"kubernetes.io/projected/369556c5-9264-481b-96e6-070055cab1ed-kube-api-access-vlkwg\") pod \"glance-9e93-account-create-59bwv\" (UID: \"369556c5-9264-481b-96e6-070055cab1ed\") " pod="openstack/glance-9e93-account-create-59bwv" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.266054 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jfhp7" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.286401 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9e93-account-create-59bwv" Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.328372 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-n2zrw"] Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.336349 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7e51-account-create-2kjwl"] Nov 25 15:12:39 crc kubenswrapper[4731]: W1125 15:12:39.337121 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode05a140c_c361_4e2a_9d20_822743eeceba.slice/crio-c50f7ed8cad0d6c37f32e8fd16141e2a9b7b926278fe8fe42ce8b49f0025d0e6 WatchSource:0}: Error finding container c50f7ed8cad0d6c37f32e8fd16141e2a9b7b926278fe8fe42ce8b49f0025d0e6: Status 404 returned error can't find the container with id c50f7ed8cad0d6c37f32e8fd16141e2a9b7b926278fe8fe42ce8b49f0025d0e6 Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.440922 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-q8jfj"] Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.588569 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-18b0-account-create-bnvc5"] Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.749376 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jfhp7"] Nov 25 15:12:39 crc kubenswrapper[4731]: W1125 15:12:39.809107 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0866bfc0_d1ef_4c65_bf79_71e9f9bbd07a.slice/crio-413b854a6cfc3cb990e66c73cd71829fed0948142ed2b887e5a77294ca9ad0d1 WatchSource:0}: Error finding container 413b854a6cfc3cb990e66c73cd71829fed0948142ed2b887e5a77294ca9ad0d1: Status 404 returned error can't find the container with id 413b854a6cfc3cb990e66c73cd71829fed0948142ed2b887e5a77294ca9ad0d1 Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.861254 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9e93-account-create-59bwv"] Nov 25 15:12:39 crc kubenswrapper[4731]: I1125 15:12:39.940036 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:39 crc kubenswrapper[4731]: E1125 15:12:39.940228 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 15:12:39 crc kubenswrapper[4731]: E1125 15:12:39.940275 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 15:12:39 crc kubenswrapper[4731]: E1125 15:12:39.940338 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift podName:7e07b944-1135-4cda-ae5f-6c387117d5bd nodeName:}" failed. No retries permitted until 2025-11-25 15:12:47.940320557 +0000 UTC m=+1126.687454987 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift") pod "swift-storage-0" (UID: "7e07b944-1135-4cda-ae5f-6c387117d5bd") : configmap "swift-ring-files" not found Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.338525 4731 generic.go:334] "Generic (PLEG): container finished" podID="7be16145-50e5-4de7-99c1-ad67f473f16f" containerID="c7d34b035a1fc1a946a3cd01625940867208e09b063ee8f7beff5580c148f214" exitCode=0 Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.338620 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q8jfj" event={"ID":"7be16145-50e5-4de7-99c1-ad67f473f16f","Type":"ContainerDied","Data":"c7d34b035a1fc1a946a3cd01625940867208e09b063ee8f7beff5580c148f214"} Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.338648 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q8jfj" event={"ID":"7be16145-50e5-4de7-99c1-ad67f473f16f","Type":"ContainerStarted","Data":"b199d415ae7bd385f99e9f15c3fa1ef197773ad8e75d15d93ec36818d82d0cbc"} Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.340973 4731 generic.go:334] "Generic (PLEG): container finished" podID="e05a140c-c361-4e2a-9d20-822743eeceba" containerID="8bb4ca1679e23a7e4e3d45fddde41509b469cbf651a2b270c0de76ccf753d7d6" exitCode=0 Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.341050 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7e51-account-create-2kjwl" event={"ID":"e05a140c-c361-4e2a-9d20-822743eeceba","Type":"ContainerDied","Data":"8bb4ca1679e23a7e4e3d45fddde41509b469cbf651a2b270c0de76ccf753d7d6"} Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.341083 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7e51-account-create-2kjwl" event={"ID":"e05a140c-c361-4e2a-9d20-822743eeceba","Type":"ContainerStarted","Data":"c50f7ed8cad0d6c37f32e8fd16141e2a9b7b926278fe8fe42ce8b49f0025d0e6"} Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.343388 4731 generic.go:334] "Generic (PLEG): container finished" podID="4092bd55-8a71-4286-a6e3-eca18e3ced0b" containerID="ddeacfa616d8ace80553f5884cfc8d1f41fdf7be91cedaffbd498c83d100e72e" exitCode=0 Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.343452 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n2zrw" event={"ID":"4092bd55-8a71-4286-a6e3-eca18e3ced0b","Type":"ContainerDied","Data":"ddeacfa616d8ace80553f5884cfc8d1f41fdf7be91cedaffbd498c83d100e72e"} Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.343476 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n2zrw" event={"ID":"4092bd55-8a71-4286-a6e3-eca18e3ced0b","Type":"ContainerStarted","Data":"553a82ae796fc5f40513a1f5b15e14de2f006c8f37bd9cabb3a044a30b23199b"} Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.344924 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jfhp7" event={"ID":"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a","Type":"ContainerStarted","Data":"413b854a6cfc3cb990e66c73cd71829fed0948142ed2b887e5a77294ca9ad0d1"} Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.346528 4731 generic.go:334] "Generic (PLEG): container finished" podID="42f1fb22-05ce-49c4-b0fb-fd0c98a81216" containerID="e2d41f7e2f2a0dd6bb0f3f0e9ff758834f6faab13ab36dc0e3829e16c92f10b8" exitCode=0 Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.346591 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/placement-18b0-account-create-bnvc5" event={"ID":"42f1fb22-05ce-49c4-b0fb-fd0c98a81216","Type":"ContainerDied","Data":"e2d41f7e2f2a0dd6bb0f3f0e9ff758834f6faab13ab36dc0e3829e16c92f10b8"} Nov 25 15:12:40 crc kubenswrapper[4731]: I1125 15:12:40.346610 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-18b0-account-create-bnvc5" event={"ID":"42f1fb22-05ce-49c4-b0fb-fd0c98a81216","Type":"ContainerStarted","Data":"9f5c2074a877f16c4c0e987b54df30a4c51c8ff68a069112a6a1671e6e93ccb3"} Nov 25 15:12:41 crc kubenswrapper[4731]: I1125 15:12:41.256054 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:12:41 crc kubenswrapper[4731]: I1125 15:12:41.336249 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-92mmw"] Nov 25 15:12:41 crc kubenswrapper[4731]: I1125 15:12:41.337663 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-92mmw" podUID="e744e850-243e-41a6-a0a7-564ce5a2c4ba" containerName="dnsmasq-dns" containerID="cri-o://74ef5049b547c2aff5288789bb966aa69b75cc19cbd4b5604e632069a1e9d754" gracePeriod=10 Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.025854 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-92mmw" podUID="e744e850-243e-41a6-a0a7-564ce5a2c4ba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.109:5353: connect: connection refused" Nov 25 15:12:42 crc kubenswrapper[4731]: W1125 15:12:42.051193 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod369556c5_9264_481b_96e6_070055cab1ed.slice/crio-c9310d51dfbb25b15c8fb01eb3812de5d063cd9a8251c0f98d10cce151b185b5 WatchSource:0}: Error finding container c9310d51dfbb25b15c8fb01eb3812de5d063cd9a8251c0f98d10cce151b185b5: Status 404 returned error can't find the container with id c9310d51dfbb25b15c8fb01eb3812de5d063cd9a8251c0f98d10cce151b185b5 Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.151644 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7e51-account-create-2kjwl" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.272675 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.284787 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4rt9\" (UniqueName: \"kubernetes.io/projected/e05a140c-c361-4e2a-9d20-822743eeceba-kube-api-access-k4rt9\") pod \"e05a140c-c361-4e2a-9d20-822743eeceba\" (UID: \"e05a140c-c361-4e2a-9d20-822743eeceba\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.285194 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05a140c-c361-4e2a-9d20-822743eeceba-operator-scripts\") pod \"e05a140c-c361-4e2a-9d20-822743eeceba\" (UID: \"e05a140c-c361-4e2a-9d20-822743eeceba\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.286236 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e05a140c-c361-4e2a-9d20-822743eeceba-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e05a140c-c361-4e2a-9d20-822743eeceba" (UID: "e05a140c-c361-4e2a-9d20-822743eeceba"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.289515 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e05a140c-c361-4e2a-9d20-822743eeceba-kube-api-access-k4rt9" (OuterVolumeSpecName: "kube-api-access-k4rt9") pod "e05a140c-c361-4e2a-9d20-822743eeceba" (UID: "e05a140c-c361-4e2a-9d20-822743eeceba"). InnerVolumeSpecName "kube-api-access-k4rt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.355313 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-q8jfj" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.384495 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-18b0-account-create-bnvc5" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.388477 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n2zrw" event={"ID":"4092bd55-8a71-4286-a6e3-eca18e3ced0b","Type":"ContainerDied","Data":"553a82ae796fc5f40513a1f5b15e14de2f006c8f37bd9cabb3a044a30b23199b"} Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.388521 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="553a82ae796fc5f40513a1f5b15e14de2f006c8f37bd9cabb3a044a30b23199b" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.388684 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4rt9\" (UniqueName: \"kubernetes.io/projected/e05a140c-c361-4e2a-9d20-822743eeceba-kube-api-access-k4rt9\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.389752 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e05a140c-c361-4e2a-9d20-822743eeceba-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.394784 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9e93-account-create-59bwv" event={"ID":"369556c5-9264-481b-96e6-070055cab1ed","Type":"ContainerStarted","Data":"c9310d51dfbb25b15c8fb01eb3812de5d063cd9a8251c0f98d10cce151b185b5"} Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.397916 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n2zrw" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.403630 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-18b0-account-create-bnvc5" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.404048 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-18b0-account-create-bnvc5" event={"ID":"42f1fb22-05ce-49c4-b0fb-fd0c98a81216","Type":"ContainerDied","Data":"9f5c2074a877f16c4c0e987b54df30a4c51c8ff68a069112a6a1671e6e93ccb3"} Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.404083 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f5c2074a877f16c4c0e987b54df30a4c51c8ff68a069112a6a1671e6e93ccb3" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.415670 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-q8jfj" event={"ID":"7be16145-50e5-4de7-99c1-ad67f473f16f","Type":"ContainerDied","Data":"b199d415ae7bd385f99e9f15c3fa1ef197773ad8e75d15d93ec36818d82d0cbc"} Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.415716 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b199d415ae7bd385f99e9f15c3fa1ef197773ad8e75d15d93ec36818d82d0cbc" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.415792 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-q8jfj" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.424619 4731 generic.go:334] "Generic (PLEG): container finished" podID="e744e850-243e-41a6-a0a7-564ce5a2c4ba" containerID="74ef5049b547c2aff5288789bb966aa69b75cc19cbd4b5604e632069a1e9d754" exitCode=0 Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.424695 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-92mmw" event={"ID":"e744e850-243e-41a6-a0a7-564ce5a2c4ba","Type":"ContainerDied","Data":"74ef5049b547c2aff5288789bb966aa69b75cc19cbd4b5604e632069a1e9d754"} Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.424730 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-92mmw" event={"ID":"e744e850-243e-41a6-a0a7-564ce5a2c4ba","Type":"ContainerDied","Data":"4fa4dc23b3185fcaebe625739e40c20d544914689f0dca4f535b781ab440dd4b"} Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.424762 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fa4dc23b3185fcaebe625739e40c20d544914689f0dca4f535b781ab440dd4b" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.424777 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.428238 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7e51-account-create-2kjwl" event={"ID":"e05a140c-c361-4e2a-9d20-822743eeceba","Type":"ContainerDied","Data":"c50f7ed8cad0d6c37f32e8fd16141e2a9b7b926278fe8fe42ce8b49f0025d0e6"} Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.428265 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c50f7ed8cad0d6c37f32e8fd16141e2a9b7b926278fe8fe42ce8b49f0025d0e6" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.428296 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7e51-account-create-2kjwl" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.490562 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-sb\") pod \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.490645 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be16145-50e5-4de7-99c1-ad67f473f16f-operator-scripts\") pod \"7be16145-50e5-4de7-99c1-ad67f473f16f\" (UID: \"7be16145-50e5-4de7-99c1-ad67f473f16f\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.490755 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-operator-scripts\") pod \"42f1fb22-05ce-49c4-b0fb-fd0c98a81216\" (UID: \"42f1fb22-05ce-49c4-b0fb-fd0c98a81216\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.490895 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmjxf\" (UniqueName: \"kubernetes.io/projected/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-kube-api-access-nmjxf\") pod \"42f1fb22-05ce-49c4-b0fb-fd0c98a81216\" (UID: \"42f1fb22-05ce-49c4-b0fb-fd0c98a81216\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.490973 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4092bd55-8a71-4286-a6e3-eca18e3ced0b-operator-scripts\") pod \"4092bd55-8a71-4286-a6e3-eca18e3ced0b\" (UID: \"4092bd55-8a71-4286-a6e3-eca18e3ced0b\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.491033 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fr7dw\" (UniqueName: \"kubernetes.io/projected/e744e850-243e-41a6-a0a7-564ce5a2c4ba-kube-api-access-fr7dw\") pod \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.491075 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5tlz\" (UniqueName: \"kubernetes.io/projected/7be16145-50e5-4de7-99c1-ad67f473f16f-kube-api-access-p5tlz\") pod \"7be16145-50e5-4de7-99c1-ad67f473f16f\" (UID: \"7be16145-50e5-4de7-99c1-ad67f473f16f\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.491118 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-khdnj\" (UniqueName: \"kubernetes.io/projected/4092bd55-8a71-4286-a6e3-eca18e3ced0b-kube-api-access-khdnj\") pod \"4092bd55-8a71-4286-a6e3-eca18e3ced0b\" (UID: \"4092bd55-8a71-4286-a6e3-eca18e3ced0b\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.491232 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-nb\") pod \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.491311 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-config\") pod \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.491428 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-dns-svc\") pod \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\" (UID: \"e744e850-243e-41a6-a0a7-564ce5a2c4ba\") " Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.493930 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7be16145-50e5-4de7-99c1-ad67f473f16f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7be16145-50e5-4de7-99c1-ad67f473f16f" (UID: "7be16145-50e5-4de7-99c1-ad67f473f16f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.494980 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "42f1fb22-05ce-49c4-b0fb-fd0c98a81216" (UID: "42f1fb22-05ce-49c4-b0fb-fd0c98a81216"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.495076 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4092bd55-8a71-4286-a6e3-eca18e3ced0b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4092bd55-8a71-4286-a6e3-eca18e3ced0b" (UID: "4092bd55-8a71-4286-a6e3-eca18e3ced0b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.496110 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be16145-50e5-4de7-99c1-ad67f473f16f-kube-api-access-p5tlz" (OuterVolumeSpecName: "kube-api-access-p5tlz") pod "7be16145-50e5-4de7-99c1-ad67f473f16f" (UID: "7be16145-50e5-4de7-99c1-ad67f473f16f"). InnerVolumeSpecName "kube-api-access-p5tlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.498223 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e744e850-243e-41a6-a0a7-564ce5a2c4ba-kube-api-access-fr7dw" (OuterVolumeSpecName: "kube-api-access-fr7dw") pod "e744e850-243e-41a6-a0a7-564ce5a2c4ba" (UID: "e744e850-243e-41a6-a0a7-564ce5a2c4ba"). InnerVolumeSpecName "kube-api-access-fr7dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.501004 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4092bd55-8a71-4286-a6e3-eca18e3ced0b-kube-api-access-khdnj" (OuterVolumeSpecName: "kube-api-access-khdnj") pod "4092bd55-8a71-4286-a6e3-eca18e3ced0b" (UID: "4092bd55-8a71-4286-a6e3-eca18e3ced0b"). InnerVolumeSpecName "kube-api-access-khdnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.504517 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-kube-api-access-nmjxf" (OuterVolumeSpecName: "kube-api-access-nmjxf") pod "42f1fb22-05ce-49c4-b0fb-fd0c98a81216" (UID: "42f1fb22-05ce-49c4-b0fb-fd0c98a81216"). InnerVolumeSpecName "kube-api-access-nmjxf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.535353 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e744e850-243e-41a6-a0a7-564ce5a2c4ba" (UID: "e744e850-243e-41a6-a0a7-564ce5a2c4ba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.539477 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e744e850-243e-41a6-a0a7-564ce5a2c4ba" (UID: "e744e850-243e-41a6-a0a7-564ce5a2c4ba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.545753 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e744e850-243e-41a6-a0a7-564ce5a2c4ba" (UID: "e744e850-243e-41a6-a0a7-564ce5a2c4ba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.553294 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-config" (OuterVolumeSpecName: "config") pod "e744e850-243e-41a6-a0a7-564ce5a2c4ba" (UID: "e744e850-243e-41a6-a0a7-564ce5a2c4ba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593614 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-khdnj\" (UniqueName: \"kubernetes.io/projected/4092bd55-8a71-4286-a6e3-eca18e3ced0b-kube-api-access-khdnj\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593656 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593670 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593683 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593699 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e744e850-243e-41a6-a0a7-564ce5a2c4ba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593716 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7be16145-50e5-4de7-99c1-ad67f473f16f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593730 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593742 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmjxf\" (UniqueName: \"kubernetes.io/projected/42f1fb22-05ce-49c4-b0fb-fd0c98a81216-kube-api-access-nmjxf\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593757 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4092bd55-8a71-4286-a6e3-eca18e3ced0b-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593768 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fr7dw\" (UniqueName: \"kubernetes.io/projected/e744e850-243e-41a6-a0a7-564ce5a2c4ba-kube-api-access-fr7dw\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:42 crc kubenswrapper[4731]: I1125 15:12:42.593780 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5tlz\" (UniqueName: \"kubernetes.io/projected/7be16145-50e5-4de7-99c1-ad67f473f16f-kube-api-access-p5tlz\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:43 crc kubenswrapper[4731]: I1125 15:12:43.442121 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-br7gv" event={"ID":"6528b54d-5bbd-4f24-9041-e1dd5fef70c7","Type":"ContainerStarted","Data":"aefa868aa27736d11333458601739aecb61bc493556f299f470885a28ea9a345"} Nov 25 15:12:43 crc kubenswrapper[4731]: I1125 15:12:43.448988 4731 generic.go:334] "Generic (PLEG): container finished" podID="0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a" 
containerID="3764b418ade0428b5f39adf864a0c0fb1bd447c7baf88582ea2a1dc0d24e60d1" exitCode=0 Nov 25 15:12:43 crc kubenswrapper[4731]: I1125 15:12:43.449047 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jfhp7" event={"ID":"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a","Type":"ContainerDied","Data":"3764b418ade0428b5f39adf864a0c0fb1bd447c7baf88582ea2a1dc0d24e60d1"} Nov 25 15:12:43 crc kubenswrapper[4731]: I1125 15:12:43.452499 4731 generic.go:334] "Generic (PLEG): container finished" podID="369556c5-9264-481b-96e6-070055cab1ed" containerID="99aff11bb58e4b1e1cbcbf3ae6ac7ae2472ac342fa26ed8dccc5e021b0448d36" exitCode=0 Nov 25 15:12:43 crc kubenswrapper[4731]: I1125 15:12:43.452596 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n2zrw" Nov 25 15:12:43 crc kubenswrapper[4731]: I1125 15:12:43.453338 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9e93-account-create-59bwv" event={"ID":"369556c5-9264-481b-96e6-070055cab1ed","Type":"ContainerDied","Data":"99aff11bb58e4b1e1cbcbf3ae6ac7ae2472ac342fa26ed8dccc5e021b0448d36"} Nov 25 15:12:43 crc kubenswrapper[4731]: I1125 15:12:43.453404 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-92mmw" Nov 25 15:12:43 crc kubenswrapper[4731]: I1125 15:12:43.472522 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-br7gv" podStartSLOduration=2.6582518459999998 podStartE2EDuration="11.472507685s" podCreationTimestamp="2025-11-25 15:12:32 +0000 UTC" firstStartedPulling="2025-11-25 15:12:33.322795331 +0000 UTC m=+1112.069929761" lastFinishedPulling="2025-11-25 15:12:42.13705117 +0000 UTC m=+1120.884185600" observedRunningTime="2025-11-25 15:12:43.470376901 +0000 UTC m=+1122.217511341" watchObservedRunningTime="2025-11-25 15:12:43.472507685 +0000 UTC m=+1122.219642115" Nov 25 15:12:43 crc kubenswrapper[4731]: I1125 15:12:43.508328 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-92mmw"] Nov 25 15:12:43 crc kubenswrapper[4731]: I1125 15:12:43.516396 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-92mmw"] Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.825003 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e744e850-243e-41a6-a0a7-564ce5a2c4ba" path="/var/lib/kubelet/pods/e744e850-243e-41a6-a0a7-564ce5a2c4ba/volumes" Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.891224 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9e93-account-create-59bwv" Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.894083 4731 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.934098 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-764s2\" (UniqueName: \"kubernetes.io/projected/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-kube-api-access-764s2\") pod \"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a\" (UID: \"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a\") " Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.934258 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vlkwg\" (UniqueName: \"kubernetes.io/projected/369556c5-9264-481b-96e6-070055cab1ed-kube-api-access-vlkwg\") pod \"369556c5-9264-481b-96e6-070055cab1ed\" (UID: \"369556c5-9264-481b-96e6-070055cab1ed\") " Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.934358 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-operator-scripts\") pod \"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a\" (UID: \"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a\") " Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.934555 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369556c5-9264-481b-96e6-070055cab1ed-operator-scripts\") pod \"369556c5-9264-481b-96e6-070055cab1ed\" (UID: \"369556c5-9264-481b-96e6-070055cab1ed\") " Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.935304 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a" (UID: "0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.935366 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/369556c5-9264-481b-96e6-070055cab1ed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "369556c5-9264-481b-96e6-070055cab1ed" (UID: "369556c5-9264-481b-96e6-070055cab1ed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.936131 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/369556c5-9264-481b-96e6-070055cab1ed-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.936164 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.940354 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/369556c5-9264-481b-96e6-070055cab1ed-kube-api-access-vlkwg" (OuterVolumeSpecName: "kube-api-access-vlkwg") pod "369556c5-9264-481b-96e6-070055cab1ed" (UID: "369556c5-9264-481b-96e6-070055cab1ed"). InnerVolumeSpecName "kube-api-access-vlkwg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:44 crc kubenswrapper[4731]: I1125 15:12:44.941851 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-kube-api-access-764s2" (OuterVolumeSpecName: "kube-api-access-764s2") pod "0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a" (UID: "0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a"). InnerVolumeSpecName "kube-api-access-764s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:45 crc kubenswrapper[4731]: I1125 15:12:45.038116 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vlkwg\" (UniqueName: \"kubernetes.io/projected/369556c5-9264-481b-96e6-070055cab1ed-kube-api-access-vlkwg\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:45 crc kubenswrapper[4731]: I1125 15:12:45.038178 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-764s2\" (UniqueName: \"kubernetes.io/projected/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a-kube-api-access-764s2\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:45 crc kubenswrapper[4731]: I1125 15:12:45.474924 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jfhp7" event={"ID":"0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a","Type":"ContainerDied","Data":"413b854a6cfc3cb990e66c73cd71829fed0948142ed2b887e5a77294ca9ad0d1"} Nov 25 15:12:45 crc kubenswrapper[4731]: I1125 15:12:45.474973 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="413b854a6cfc3cb990e66c73cd71829fed0948142ed2b887e5a77294ca9ad0d1" Nov 25 15:12:45 crc kubenswrapper[4731]: I1125 15:12:45.474988 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jfhp7" Nov 25 15:12:45 crc kubenswrapper[4731]: I1125 15:12:45.477020 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9e93-account-create-59bwv" event={"ID":"369556c5-9264-481b-96e6-070055cab1ed","Type":"ContainerDied","Data":"c9310d51dfbb25b15c8fb01eb3812de5d063cd9a8251c0f98d10cce151b185b5"} Nov 25 15:12:45 crc kubenswrapper[4731]: I1125 15:12:45.477048 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9310d51dfbb25b15c8fb01eb3812de5d063cd9a8251c0f98d10cce151b185b5" Nov 25 15:12:45 crc kubenswrapper[4731]: I1125 15:12:45.477070 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9e93-account-create-59bwv" Nov 25 15:12:47 crc kubenswrapper[4731]: E1125 15:12:47.989586 4731 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Nov 25 15:12:47 crc kubenswrapper[4731]: E1125 15:12:47.990000 4731 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Nov 25 15:12:47 crc kubenswrapper[4731]: E1125 15:12:47.990094 4731 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift podName:7e07b944-1135-4cda-ae5f-6c387117d5bd nodeName:}" failed. No retries permitted until 2025-11-25 15:13:03.990063552 +0000 UTC m=+1142.737198022 (durationBeforeRetry 16s). 
Nov 25 15:12:47 crc kubenswrapper[4731]: I1125 15:12:47.990097 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.121708 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jmh97"] Nov 25 15:12:49 crc kubenswrapper[4731]: E1125 15:12:49.122121 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4092bd55-8a71-4286-a6e3-eca18e3ced0b" containerName="mariadb-database-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122143 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4092bd55-8a71-4286-a6e3-eca18e3ced0b" containerName="mariadb-database-create" Nov 25 15:12:49 crc kubenswrapper[4731]: E1125 15:12:49.122152 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e744e850-243e-41a6-a0a7-564ce5a2c4ba" containerName="dnsmasq-dns" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122159 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e744e850-243e-41a6-a0a7-564ce5a2c4ba" containerName="dnsmasq-dns" Nov 25 15:12:49 crc kubenswrapper[4731]: E1125 15:12:49.122169 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a" containerName="mariadb-database-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122177 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a" containerName="mariadb-database-create" Nov 25 15:12:49 crc kubenswrapper[4731]: E1125 15:12:49.122190 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e744e850-243e-41a6-a0a7-564ce5a2c4ba" containerName="init" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122197 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e744e850-243e-41a6-a0a7-564ce5a2c4ba" containerName="init" Nov 25 15:12:49 crc kubenswrapper[4731]: E1125 15:12:49.122211 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be16145-50e5-4de7-99c1-ad67f473f16f" containerName="mariadb-database-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122218 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be16145-50e5-4de7-99c1-ad67f473f16f" containerName="mariadb-database-create" Nov 25 15:12:49 crc kubenswrapper[4731]: E1125 15:12:49.122230 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="369556c5-9264-481b-96e6-070055cab1ed" containerName="mariadb-account-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122237 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="369556c5-9264-481b-96e6-070055cab1ed" containerName="mariadb-account-create" Nov 25 15:12:49 crc kubenswrapper[4731]: E1125 15:12:49.122255 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42f1fb22-05ce-49c4-b0fb-fd0c98a81216" containerName="mariadb-account-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122262 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="42f1fb22-05ce-49c4-b0fb-fd0c98a81216" 
containerName="mariadb-account-create" Nov 25 15:12:49 crc kubenswrapper[4731]: E1125 15:12:49.122277 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05a140c-c361-4e2a-9d20-822743eeceba" containerName="mariadb-account-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122285 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05a140c-c361-4e2a-9d20-822743eeceba" containerName="mariadb-account-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122447 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a" containerName="mariadb-database-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122460 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="42f1fb22-05ce-49c4-b0fb-fd0c98a81216" containerName="mariadb-account-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122468 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4092bd55-8a71-4286-a6e3-eca18e3ced0b" containerName="mariadb-database-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122480 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e744e850-243e-41a6-a0a7-564ce5a2c4ba" containerName="dnsmasq-dns" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122490 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="369556c5-9264-481b-96e6-070055cab1ed" containerName="mariadb-account-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122576 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e05a140c-c361-4e2a-9d20-822743eeceba" containerName="mariadb-account-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.122598 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be16145-50e5-4de7-99c1-ad67f473f16f" containerName="mariadb-database-create" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.123298 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.127241 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lxlch" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.127290 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.131761 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jmh97"] Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.204889 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-crvzg" podUID="5881193a-868d-4633-ae6d-0428aa755057" containerName="ovn-controller" probeResult="failure" output=< Nov 25 15:12:49 crc kubenswrapper[4731]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 15:12:49 crc kubenswrapper[4731]: > Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.211303 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-config-data\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.211353 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-combined-ca-bundle\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.211546 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r46xx\" (UniqueName: \"kubernetes.io/projected/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-kube-api-access-r46xx\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.211611 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-db-sync-config-data\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.313072 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r46xx\" (UniqueName: \"kubernetes.io/projected/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-kube-api-access-r46xx\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.313476 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-db-sync-config-data\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.313733 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-config-data\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.313806 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-combined-ca-bundle\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.320720 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-combined-ca-bundle\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.320774 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-config-data\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.322628 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-db-sync-config-data\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.331355 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r46xx\" (UniqueName: \"kubernetes.io/projected/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-kube-api-access-r46xx\") pod \"glance-db-sync-jmh97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.447004 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jmh97" Nov 25 15:12:49 crc kubenswrapper[4731]: I1125 15:12:49.957906 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jmh97"] Nov 25 15:12:50 crc kubenswrapper[4731]: I1125 15:12:50.039850 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:12:50 crc kubenswrapper[4731]: I1125 15:12:50.039920 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:12:50 crc kubenswrapper[4731]: I1125 15:12:50.561464 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jmh97" event={"ID":"8402be71-b88d-4091-a1ca-f7ceb7f8ba97","Type":"ContainerStarted","Data":"65088962793e5be5157fdfb7dc01f052fe85d7ea87beed86983a846050423075"} Nov 25 15:12:50 crc kubenswrapper[4731]: I1125 15:12:50.563591 4731 generic.go:334] "Generic (PLEG): container finished" podID="6528b54d-5bbd-4f24-9041-e1dd5fef70c7" containerID="aefa868aa27736d11333458601739aecb61bc493556f299f470885a28ea9a345" exitCode=0 Nov 25 15:12:50 crc kubenswrapper[4731]: I1125 15:12:50.563655 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-br7gv" event={"ID":"6528b54d-5bbd-4f24-9041-e1dd5fef70c7","Type":"ContainerDied","Data":"aefa868aa27736d11333458601739aecb61bc493556f299f470885a28ea9a345"} Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.886023 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-br7gv" Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.959901 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjchb\" (UniqueName: \"kubernetes.io/projected/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-kube-api-access-qjchb\") pod \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.959947 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-dispersionconf\") pod \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.959998 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-etc-swift\") pod \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.960037 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-scripts\") pod \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.960082 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-ring-data-devices\") pod \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.960160 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-combined-ca-bundle\") pod \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.960182 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-swiftconf\") pod \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\" (UID: \"6528b54d-5bbd-4f24-9041-e1dd5fef70c7\") " Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.960802 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "6528b54d-5bbd-4f24-9041-e1dd5fef70c7" (UID: "6528b54d-5bbd-4f24-9041-e1dd5fef70c7"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.961600 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "6528b54d-5bbd-4f24-9041-e1dd5fef70c7" (UID: "6528b54d-5bbd-4f24-9041-e1dd5fef70c7"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.967298 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-kube-api-access-qjchb" (OuterVolumeSpecName: "kube-api-access-qjchb") pod "6528b54d-5bbd-4f24-9041-e1dd5fef70c7" (UID: "6528b54d-5bbd-4f24-9041-e1dd5fef70c7"). InnerVolumeSpecName "kube-api-access-qjchb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.967773 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "6528b54d-5bbd-4f24-9041-e1dd5fef70c7" (UID: "6528b54d-5bbd-4f24-9041-e1dd5fef70c7"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.979842 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-scripts" (OuterVolumeSpecName: "scripts") pod "6528b54d-5bbd-4f24-9041-e1dd5fef70c7" (UID: "6528b54d-5bbd-4f24-9041-e1dd5fef70c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:12:51 crc kubenswrapper[4731]: I1125 15:12:51.993180 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6528b54d-5bbd-4f24-9041-e1dd5fef70c7" (UID: "6528b54d-5bbd-4f24-9041-e1dd5fef70c7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.017595 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "6528b54d-5bbd-4f24-9041-e1dd5fef70c7" (UID: "6528b54d-5bbd-4f24-9041-e1dd5fef70c7"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.062394 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjchb\" (UniqueName: \"kubernetes.io/projected/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-kube-api-access-qjchb\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.062684 4731 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-dispersionconf\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.062752 4731 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-etc-swift\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.062872 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.062947 4731 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-ring-data-devices\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.063018 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.063079 4731 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/6528b54d-5bbd-4f24-9041-e1dd5fef70c7-swiftconf\") on node \"crc\" DevicePath \"\"" Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.579081 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-br7gv" event={"ID":"6528b54d-5bbd-4f24-9041-e1dd5fef70c7","Type":"ContainerDied","Data":"1db8cffd72939ef08cb726b3ee0134dc31ae71e3ee7591bf92a683ccbef97fe3"} Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.579397 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db8cffd72939ef08cb726b3ee0134dc31ae71e3ee7591bf92a683ccbef97fe3" Nov 25 15:12:52 crc kubenswrapper[4731]: I1125 15:12:52.579131 4731 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.200584 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-crvzg" podUID="5881193a-868d-4633-ae6d-0428aa755057" containerName="ovn-controller" probeResult="failure" output=< Nov 25 15:12:54 crc kubenswrapper[4731]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 15:12:54 crc kubenswrapper[4731]: > Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.209507 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.221010 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-w2gw5" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.431183 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-crvzg-config-j9lh4"] Nov 25 15:12:54 crc kubenswrapper[4731]: E1125 15:12:54.431598 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6528b54d-5bbd-4f24-9041-e1dd5fef70c7" containerName="swift-ring-rebalance" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.431622 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6528b54d-5bbd-4f24-9041-e1dd5fef70c7" containerName="swift-ring-rebalance" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.431867 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6528b54d-5bbd-4f24-9041-e1dd5fef70c7" containerName="swift-ring-rebalance" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.432509 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.434502 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.450784 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crvzg-config-j9lh4"] Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.521800 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-log-ovn\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.521956 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.522037 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-additional-scripts\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.522306 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run-ovn\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.522355 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbd5k\" (UniqueName: \"kubernetes.io/projected/6662879b-0dee-41fb-80ed-9cd584d3cdf6-kube-api-access-cbd5k\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.522408 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-scripts\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.623833 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-log-ovn\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.623916 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.623952 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-additional-scripts\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.624004 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbd5k\" (UniqueName: \"kubernetes.io/projected/6662879b-0dee-41fb-80ed-9cd584d3cdf6-kube-api-access-cbd5k\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.624210 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run-ovn\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.624232 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-scripts\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc 
kubenswrapper[4731]: I1125 15:12:54.624300 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-log-ovn\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.624445 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.624586 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run-ovn\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.625246 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-additional-scripts\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.626097 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-scripts\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.643905 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbd5k\" (UniqueName: \"kubernetes.io/projected/6662879b-0dee-41fb-80ed-9cd584d3cdf6-kube-api-access-cbd5k\") pod \"ovn-controller-crvzg-config-j9lh4\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") " pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:54 crc kubenswrapper[4731]: I1125 15:12:54.800154 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:12:57 crc kubenswrapper[4731]: I1125 15:12:57.621567 4731 generic.go:334] "Generic (PLEG): container finished" podID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" containerID="749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0" exitCode=0 Nov 25 15:12:57 crc kubenswrapper[4731]: I1125 15:12:57.621684 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8","Type":"ContainerDied","Data":"749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0"} Nov 25 15:12:58 crc kubenswrapper[4731]: I1125 15:12:58.632398 4731 generic.go:334] "Generic (PLEG): container finished" podID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" containerID="a801c820f5511e4c5db7b4c69c0cacb99eb6775c13be64ca3267bfeeb186738b" exitCode=0 Nov 25 15:12:58 crc kubenswrapper[4731]: I1125 15:12:58.632449 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9c970e0-c27c-4a82-afd5-97aacf7536f9","Type":"ContainerDied","Data":"a801c820f5511e4c5db7b4c69c0cacb99eb6775c13be64ca3267bfeeb186738b"} Nov 25 15:12:59 crc kubenswrapper[4731]: I1125 15:12:59.204908 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-crvzg" podUID="5881193a-868d-4633-ae6d-0428aa755057" containerName="ovn-controller" probeResult="failure" output=< Nov 25 15:12:59 crc kubenswrapper[4731]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Nov 25 15:12:59 crc kubenswrapper[4731]: > Nov 25 15:13:02 crc kubenswrapper[4731]: I1125 15:13:02.674480 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8","Type":"ContainerStarted","Data":"cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08"} Nov 25 15:13:02 crc kubenswrapper[4731]: I1125 15:13:02.676348 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:13:02 crc kubenswrapper[4731]: I1125 15:13:02.701979 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.544488559 podStartE2EDuration="1m19.701964313s" podCreationTimestamp="2025-11-25 15:11:43 +0000 UTC" firstStartedPulling="2025-11-25 15:11:45.913941644 +0000 UTC m=+1064.661076074" lastFinishedPulling="2025-11-25 15:12:23.071417398 +0000 UTC m=+1101.818551828" observedRunningTime="2025-11-25 15:13:02.697792459 +0000 UTC m=+1141.444926899" watchObservedRunningTime="2025-11-25 15:13:02.701964313 +0000 UTC m=+1141.449098743" Nov 25 15:13:02 crc kubenswrapper[4731]: I1125 15:13:02.751827 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crvzg-config-j9lh4"] Nov 25 15:13:02 crc kubenswrapper[4731]: W1125 15:13:02.765461 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6662879b_0dee_41fb_80ed_9cd584d3cdf6.slice/crio-31a621e657bbeb4eade6c388d418a5d65a19e9c761163721c3962cea1e830b29 WatchSource:0}: Error finding container 31a621e657bbeb4eade6c388d418a5d65a19e9c761163721c3962cea1e830b29: Status 404 returned error can't find the container with id 31a621e657bbeb4eade6c388d418a5d65a19e9c761163721c3962cea1e830b29 Nov 25 15:13:03 crc kubenswrapper[4731]: I1125 15:13:03.686228 4731 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9c970e0-c27c-4a82-afd5-97aacf7536f9","Type":"ContainerStarted","Data":"be0f019b31f669736ec904676738a38ee89afb650c4b13cb677a2844987914b5"} Nov 25 15:13:03 crc kubenswrapper[4731]: I1125 15:13:03.687062 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 15:13:03 crc kubenswrapper[4731]: I1125 15:13:03.689253 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jmh97" event={"ID":"8402be71-b88d-4091-a1ca-f7ceb7f8ba97","Type":"ContainerStarted","Data":"c71c9409472172c011a8303a9ee45b954237837a31dbc5a812378a1b1008eb74"} Nov 25 15:13:03 crc kubenswrapper[4731]: I1125 15:13:03.690988 4731 generic.go:334] "Generic (PLEG): container finished" podID="6662879b-0dee-41fb-80ed-9cd584d3cdf6" containerID="2909c1e0a65b5c4a71509df9879d5e4d7f5c19a479a9c96f68a0c1731d774a53" exitCode=0 Nov 25 15:13:03 crc kubenswrapper[4731]: I1125 15:13:03.691050 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crvzg-config-j9lh4" event={"ID":"6662879b-0dee-41fb-80ed-9cd584d3cdf6","Type":"ContainerDied","Data":"2909c1e0a65b5c4a71509df9879d5e4d7f5c19a479a9c96f68a0c1731d774a53"} Nov 25 15:13:03 crc kubenswrapper[4731]: I1125 15:13:03.691131 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crvzg-config-j9lh4" event={"ID":"6662879b-0dee-41fb-80ed-9cd584d3cdf6","Type":"ContainerStarted","Data":"31a621e657bbeb4eade6c388d418a5d65a19e9c761163721c3962cea1e830b29"} Nov 25 15:13:03 crc kubenswrapper[4731]: I1125 15:13:03.718252 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=43.401911417 podStartE2EDuration="1m20.718227557s" podCreationTimestamp="2025-11-25 15:11:43 +0000 UTC" firstStartedPulling="2025-11-25 15:11:45.756027521 +0000 UTC m=+1064.503161951" lastFinishedPulling="2025-11-25 15:12:23.072343661 +0000 UTC m=+1101.819478091" observedRunningTime="2025-11-25 15:13:03.709514949 +0000 UTC m=+1142.456649389" watchObservedRunningTime="2025-11-25 15:13:03.718227557 +0000 UTC m=+1142.465361987" Nov 25 15:13:03 crc kubenswrapper[4731]: I1125 15:13:03.744263 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jmh97" podStartSLOduration=2.1190426430000002 podStartE2EDuration="14.744233489s" podCreationTimestamp="2025-11-25 15:12:49 +0000 UTC" firstStartedPulling="2025-11-25 15:12:49.969392216 +0000 UTC m=+1128.716526646" lastFinishedPulling="2025-11-25 15:13:02.594583062 +0000 UTC m=+1141.341717492" observedRunningTime="2025-11-25 15:13:03.741435809 +0000 UTC m=+1142.488570259" watchObservedRunningTime="2025-11-25 15:13:03.744233489 +0000 UTC m=+1142.491367929" Nov 25 15:13:03 crc kubenswrapper[4731]: I1125 15:13:03.998810 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:13:04 crc kubenswrapper[4731]: I1125 15:13:04.024701 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/7e07b944-1135-4cda-ae5f-6c387117d5bd-etc-swift\") pod \"swift-storage-0\" (UID: \"7e07b944-1135-4cda-ae5f-6c387117d5bd\") " pod="openstack/swift-storage-0" Nov 25 15:13:04 crc kubenswrapper[4731]: 
Nov 25 15:13:04 crc kubenswrapper[4731]: I1125 15:13:04.166955 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Nov 25 15:13:04 crc kubenswrapper[4731]: I1125 15:13:04.217158 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-crvzg"
Nov 25 15:13:04 crc kubenswrapper[4731]: I1125 15:13:04.768338 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.456229 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crvzg-config-j9lh4"
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.533865 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-additional-scripts\") pod \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") "
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.533939 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-log-ovn\") pod \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") "
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.534003 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run\") pod \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") "
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.534029 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbd5k\" (UniqueName: \"kubernetes.io/projected/6662879b-0dee-41fb-80ed-9cd584d3cdf6-kube-api-access-cbd5k\") pod \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") "
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.534058 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-scripts\") pod \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") "
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.534138 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run-ovn\") pod \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\" (UID: \"6662879b-0dee-41fb-80ed-9cd584d3cdf6\") "
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.534596 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "6662879b-0dee-41fb-80ed-9cd584d3cdf6" (UID: "6662879b-0dee-41fb-80ed-9cd584d3cdf6"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.534784 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run" (OuterVolumeSpecName: "var-run") pod "6662879b-0dee-41fb-80ed-9cd584d3cdf6" (UID: "6662879b-0dee-41fb-80ed-9cd584d3cdf6"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.534845 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "6662879b-0dee-41fb-80ed-9cd584d3cdf6" (UID: "6662879b-0dee-41fb-80ed-9cd584d3cdf6"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.535497 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "6662879b-0dee-41fb-80ed-9cd584d3cdf6" (UID: "6662879b-0dee-41fb-80ed-9cd584d3cdf6"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.535700 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-scripts" (OuterVolumeSpecName: "scripts") pod "6662879b-0dee-41fb-80ed-9cd584d3cdf6" (UID: "6662879b-0dee-41fb-80ed-9cd584d3cdf6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.541232 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6662879b-0dee-41fb-80ed-9cd584d3cdf6-kube-api-access-cbd5k" (OuterVolumeSpecName: "kube-api-access-cbd5k") pod "6662879b-0dee-41fb-80ed-9cd584d3cdf6" (UID: "6662879b-0dee-41fb-80ed-9cd584d3cdf6"). InnerVolumeSpecName "kube-api-access-cbd5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.635727 4731 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-additional-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.635760 4731 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-log-ovn\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.635771 4731 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.635784 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbd5k\" (UniqueName: \"kubernetes.io/projected/6662879b-0dee-41fb-80ed-9cd584d3cdf6-kube-api-access-cbd5k\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.635794 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6662879b-0dee-41fb-80ed-9cd584d3cdf6-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.635804 4731 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/6662879b-0dee-41fb-80ed-9cd584d3cdf6-var-run-ovn\") on node \"crc\" DevicePath \"\""
pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"a850f601259c7943efafa402533184fb2cb3cdfb2f0769e19ef6552c6d7dcd42"} Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.713943 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crvzg-config-j9lh4" event={"ID":"6662879b-0dee-41fb-80ed-9cd584d3cdf6","Type":"ContainerDied","Data":"31a621e657bbeb4eade6c388d418a5d65a19e9c761163721c3962cea1e830b29"} Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.714004 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a621e657bbeb4eade6c388d418a5d65a19e9c761163721c3962cea1e830b29" Nov 25 15:13:05 crc kubenswrapper[4731]: I1125 15:13:05.714101 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crvzg-config-j9lh4" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.566144 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-crvzg-config-j9lh4"] Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.577432 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-crvzg-config-j9lh4"] Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.687436 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-crvzg-config-zxntf"] Nov 25 15:13:06 crc kubenswrapper[4731]: E1125 15:13:06.687858 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6662879b-0dee-41fb-80ed-9cd584d3cdf6" containerName="ovn-config" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.687882 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6662879b-0dee-41fb-80ed-9cd584d3cdf6" containerName="ovn-config" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.688102 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6662879b-0dee-41fb-80ed-9cd584d3cdf6" containerName="ovn-config" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.688786 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-crvzg-config-zxntf" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.692202 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.707218 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crvzg-config-zxntf"] Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.753459 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-scripts\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.753638 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run-ovn\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.753672 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-log-ovn\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.753695 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-additional-scripts\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.753745 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bc9f\" (UniqueName: \"kubernetes.io/projected/14f6f887-65c2-4268-b393-c087414272b9-kube-api-access-7bc9f\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.753789 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.829742 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6662879b-0dee-41fb-80ed-9cd584d3cdf6" path="/var/lib/kubelet/pods/6662879b-0dee-41fb-80ed-9cd584d3cdf6/volumes" Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.855491 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run-ovn\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf" 
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.855555 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-log-ovn\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.855584 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-additional-scripts\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.855631 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bc9f\" (UniqueName: \"kubernetes.io/projected/14f6f887-65c2-4268-b393-c087414272b9-kube-api-access-7bc9f\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.855689 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.855741 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-scripts\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.859036 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.859086 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-log-ovn\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.859115 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run-ovn\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.859754 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-additional-scripts\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.861784 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-scripts\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:06 crc kubenswrapper[4731]: I1125 15:13:06.884061 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bc9f\" (UniqueName: \"kubernetes.io/projected/14f6f887-65c2-4268-b393-c087414272b9-kube-api-access-7bc9f\") pod \"ovn-controller-crvzg-config-zxntf\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:07 crc kubenswrapper[4731]: I1125 15:13:07.009947 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crvzg-config-zxntf"
Nov 25 15:13:07 crc kubenswrapper[4731]: I1125 15:13:07.770218 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-crvzg-config-zxntf"]
Nov 25 15:13:07 crc kubenswrapper[4731]: W1125 15:13:07.853483 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14f6f887_65c2_4268_b393_c087414272b9.slice/crio-1253a43445fe4ba49a566708f870e6e8b1f795184e92bb7ee395fb2e0ef63e4b WatchSource:0}: Error finding container 1253a43445fe4ba49a566708f870e6e8b1f795184e92bb7ee395fb2e0ef63e4b: Status 404 returned error can't find the container with id 1253a43445fe4ba49a566708f870e6e8b1f795184e92bb7ee395fb2e0ef63e4b
Nov 25 15:13:08 crc kubenswrapper[4731]: I1125 15:13:08.744685 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"14b933a0c1609084b2d1cb1284403895f29f3326db1cd41c135b4cf17c95b3ab"}
Nov 25 15:13:08 crc kubenswrapper[4731]: I1125 15:13:08.745021 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"2bed24a1f61717bc6ddd0503113bc0bac505abe94899a27c0ed8a720b75b9bcf"}
Nov 25 15:13:08 crc kubenswrapper[4731]: I1125 15:13:08.746196 4731 generic.go:334] "Generic (PLEG): container finished" podID="14f6f887-65c2-4268-b393-c087414272b9" containerID="c25029d63b4507ef0dddccf794c9a35c08e26ed37d06ab8a559e3f1a839f16db" exitCode=0
Nov 25 15:13:08 crc kubenswrapper[4731]: I1125 15:13:08.746228 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crvzg-config-zxntf" event={"ID":"14f6f887-65c2-4268-b393-c087414272b9","Type":"ContainerDied","Data":"c25029d63b4507ef0dddccf794c9a35c08e26ed37d06ab8a559e3f1a839f16db"}
Nov 25 15:13:08 crc kubenswrapper[4731]: I1125 15:13:08.746243 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crvzg-config-zxntf" event={"ID":"14f6f887-65c2-4268-b393-c087414272b9","Type":"ContainerStarted","Data":"1253a43445fe4ba49a566708f870e6e8b1f795184e92bb7ee395fb2e0ef63e4b"}
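Editor's note: the manager.go:1169 warnings above (here and at 15:13:02) record a benign race: a cgroup-creation watch event for a short-lived container arrives after the container is already gone, so the lookup returns 404 and the event is dropped rather than failing the watcher. The sketch below shows that tolerate-and-log pattern; errNotFound and lookupContainer are hypothetical stand-ins, not cAdvisor's API.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-in for the 404 the runtime returns once the
// short-lived container's cgroup has been torn down.
var errNotFound = errors.New("status 404: container not found")

func lookupContainer(id string) error {
	return errNotFound // the crio-1253a434... cgroup is already gone
}

// processWatchEvent tolerates the deletion race: warn and drop the event
// instead of treating it as fatal, matching the W-level log lines above.
func processWatchEvent(containerID string) {
	if err := lookupContainer(containerID); err != nil {
		fmt.Printf("W Failed to process watch event: error finding container %.12s: %v\n",
			containerID, err)
		return
	}
	// (would update stats tracking for the new container here)
}

func main() {
	processWatchEvent("1253a43445fe4ba49a566708f870e6e8b1f795184e92bb7ee395fb2e0ef63e4b")
}
```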
Need to start a new one" pod="openstack/ovn-controller-crvzg-config-zxntf" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.126624 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run-ovn\") pod \"14f6f887-65c2-4268-b393-c087414272b9\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.126683 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-additional-scripts\") pod \"14f6f887-65c2-4268-b393-c087414272b9\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.126721 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-scripts\") pod \"14f6f887-65c2-4268-b393-c087414272b9\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.126755 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run\") pod \"14f6f887-65c2-4268-b393-c087414272b9\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.126771 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-log-ovn\") pod \"14f6f887-65c2-4268-b393-c087414272b9\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.126804 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bc9f\" (UniqueName: \"kubernetes.io/projected/14f6f887-65c2-4268-b393-c087414272b9-kube-api-access-7bc9f\") pod \"14f6f887-65c2-4268-b393-c087414272b9\" (UID: \"14f6f887-65c2-4268-b393-c087414272b9\") " Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.126811 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "14f6f887-65c2-4268-b393-c087414272b9" (UID: "14f6f887-65c2-4268-b393-c087414272b9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.126937 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "14f6f887-65c2-4268-b393-c087414272b9" (UID: "14f6f887-65c2-4268-b393-c087414272b9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.126982 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run" (OuterVolumeSpecName: "var-run") pod "14f6f887-65c2-4268-b393-c087414272b9" (UID: "14f6f887-65c2-4268-b393-c087414272b9"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.127487 4731 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.127512 4731 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.127524 4731 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/14f6f887-65c2-4268-b393-c087414272b9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.127619 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "14f6f887-65c2-4268-b393-c087414272b9" (UID: "14f6f887-65c2-4268-b393-c087414272b9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.127795 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-scripts" (OuterVolumeSpecName: "scripts") pod "14f6f887-65c2-4268-b393-c087414272b9" (UID: "14f6f887-65c2-4268-b393-c087414272b9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.137187 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f6f887-65c2-4268-b393-c087414272b9-kube-api-access-7bc9f" (OuterVolumeSpecName: "kube-api-access-7bc9f") pod "14f6f887-65c2-4268-b393-c087414272b9" (UID: "14f6f887-65c2-4268-b393-c087414272b9"). InnerVolumeSpecName "kube-api-access-7bc9f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.229037 4731 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-additional-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.229072 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/14f6f887-65c2-4268-b393-c087414272b9-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.229082 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bc9f\" (UniqueName: \"kubernetes.io/projected/14f6f887-65c2-4268-b393-c087414272b9-kube-api-access-7bc9f\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.769743 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-crvzg-config-zxntf" event={"ID":"14f6f887-65c2-4268-b393-c087414272b9","Type":"ContainerDied","Data":"1253a43445fe4ba49a566708f870e6e8b1f795184e92bb7ee395fb2e0ef63e4b"} Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.769782 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1253a43445fe4ba49a566708f870e6e8b1f795184e92bb7ee395fb2e0ef63e4b" Nov 25 15:13:10 crc kubenswrapper[4731]: I1125 15:13:10.769848 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-crvzg-config-zxntf" Nov 25 15:13:11 crc kubenswrapper[4731]: I1125 15:13:11.153666 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-crvzg-config-zxntf"] Nov 25 15:13:11 crc kubenswrapper[4731]: I1125 15:13:11.166148 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-crvzg-config-zxntf"] Nov 25 15:13:11 crc kubenswrapper[4731]: I1125 15:13:11.783541 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"bdbc84d279d0e99774131162d74569428384dcc4a9ef291d2a47ea80566af19c"} Nov 25 15:13:12 crc kubenswrapper[4731]: I1125 15:13:12.792377 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"86ead0ffd600fcd29bbe41bbd2da3eebd5a5bddb6946f0ccdc1c047073b71441"} Nov 25 15:13:12 crc kubenswrapper[4731]: I1125 15:13:12.820417 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14f6f887-65c2-4268-b393-c087414272b9" path="/var/lib/kubelet/pods/14f6f887-65c2-4268-b393-c087414272b9/volumes" Nov 25 15:13:13 crc kubenswrapper[4731]: I1125 15:13:13.806031 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"eb12370b7f61dd1051f644eed7372a24acce64d39f82991cf103bbe5216fce37"} Nov 25 15:13:14 crc kubenswrapper[4731]: I1125 15:13:14.824639 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"aa24439dcbb13b16b0bc621b015c555027b50a9df55aab546285eef58bfbd3c5"} Nov 25 15:13:14 crc kubenswrapper[4731]: I1125 15:13:14.824990 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"011ab64ad5564c13247845195448b8a2bedd2b7f7db7a8bc6f14cea08cd5263a"} Nov 25 15:13:14 crc kubenswrapper[4731]: I1125 15:13:14.824999 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"e18388851066c27937903252e2fd8c8ae5be304ad06e53adf05db93ac4fa36c5"} Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.082120 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.305958 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.476232 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-c8qvw"] Nov 25 15:13:15 crc kubenswrapper[4731]: E1125 15:13:15.476917 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f6f887-65c2-4268-b393-c087414272b9" containerName="ovn-config" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.476939 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f6f887-65c2-4268-b393-c087414272b9" containerName="ovn-config" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.477272 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f6f887-65c2-4268-b393-c087414272b9" containerName="ovn-config" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.477994 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-c8qvw" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.482163 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-c8qvw"] Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.562066 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-v6khp"] Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.563393 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v6khp" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.573930 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v6khp"] Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.591920 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-075e-account-create-lp2p6"] Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.593037 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-075e-account-create-lp2p6" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.598442 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-075e-account-create-lp2p6"] Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.602724 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.624165 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e04865-a9b1-495f-b0c3-0042b35c33c2-operator-scripts\") pod \"cinder-db-create-c8qvw\" (UID: \"26e04865-a9b1-495f-b0c3-0042b35c33c2\") " pod="openstack/cinder-db-create-c8qvw" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.624247 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nctss\" (UniqueName: \"kubernetes.io/projected/26e04865-a9b1-495f-b0c3-0042b35c33c2-kube-api-access-nctss\") pod \"cinder-db-create-c8qvw\" (UID: \"26e04865-a9b1-495f-b0c3-0042b35c33c2\") " pod="openstack/cinder-db-create-c8qvw" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.725523 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b21bc434-c659-4153-ad06-7eb075fae022-operator-scripts\") pod \"cinder-075e-account-create-lp2p6\" (UID: \"b21bc434-c659-4153-ad06-7eb075fae022\") " pod="openstack/cinder-075e-account-create-lp2p6" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.725595 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e04865-a9b1-495f-b0c3-0042b35c33c2-operator-scripts\") pod \"cinder-db-create-c8qvw\" (UID: \"26e04865-a9b1-495f-b0c3-0042b35c33c2\") " pod="openstack/cinder-db-create-c8qvw" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.725685 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvgrx\" (UniqueName: \"kubernetes.io/projected/b21bc434-c659-4153-ad06-7eb075fae022-kube-api-access-gvgrx\") pod \"cinder-075e-account-create-lp2p6\" (UID: \"b21bc434-c659-4153-ad06-7eb075fae022\") " pod="openstack/cinder-075e-account-create-lp2p6" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.725720 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj78p\" (UniqueName: \"kubernetes.io/projected/057571f2-061b-4efa-8faf-3949fbae1889-kube-api-access-nj78p\") pod \"barbican-db-create-v6khp\" (UID: \"057571f2-061b-4efa-8faf-3949fbae1889\") " pod="openstack/barbican-db-create-v6khp" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.725755 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nctss\" (UniqueName: \"kubernetes.io/projected/26e04865-a9b1-495f-b0c3-0042b35c33c2-kube-api-access-nctss\") pod \"cinder-db-create-c8qvw\" (UID: \"26e04865-a9b1-495f-b0c3-0042b35c33c2\") " pod="openstack/cinder-db-create-c8qvw" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.725777 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/057571f2-061b-4efa-8faf-3949fbae1889-operator-scripts\") pod 
\"barbican-db-create-v6khp\" (UID: \"057571f2-061b-4efa-8faf-3949fbae1889\") " pod="openstack/barbican-db-create-v6khp" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.726531 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e04865-a9b1-495f-b0c3-0042b35c33c2-operator-scripts\") pod \"cinder-db-create-c8qvw\" (UID: \"26e04865-a9b1-495f-b0c3-0042b35c33c2\") " pod="openstack/cinder-db-create-c8qvw" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.744836 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nctss\" (UniqueName: \"kubernetes.io/projected/26e04865-a9b1-495f-b0c3-0042b35c33c2-kube-api-access-nctss\") pod \"cinder-db-create-c8qvw\" (UID: \"26e04865-a9b1-495f-b0c3-0042b35c33c2\") " pod="openstack/cinder-db-create-c8qvw" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.808371 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-c8qvw" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.827785 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b21bc434-c659-4153-ad06-7eb075fae022-operator-scripts\") pod \"cinder-075e-account-create-lp2p6\" (UID: \"b21bc434-c659-4153-ad06-7eb075fae022\") " pod="openstack/cinder-075e-account-create-lp2p6" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.828085 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvgrx\" (UniqueName: \"kubernetes.io/projected/b21bc434-c659-4153-ad06-7eb075fae022-kube-api-access-gvgrx\") pod \"cinder-075e-account-create-lp2p6\" (UID: \"b21bc434-c659-4153-ad06-7eb075fae022\") " pod="openstack/cinder-075e-account-create-lp2p6" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.828149 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj78p\" (UniqueName: \"kubernetes.io/projected/057571f2-061b-4efa-8faf-3949fbae1889-kube-api-access-nj78p\") pod \"barbican-db-create-v6khp\" (UID: \"057571f2-061b-4efa-8faf-3949fbae1889\") " pod="openstack/barbican-db-create-v6khp" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.828210 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/057571f2-061b-4efa-8faf-3949fbae1889-operator-scripts\") pod \"barbican-db-create-v6khp\" (UID: \"057571f2-061b-4efa-8faf-3949fbae1889\") " pod="openstack/barbican-db-create-v6khp" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.828832 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/057571f2-061b-4efa-8faf-3949fbae1889-operator-scripts\") pod \"barbican-db-create-v6khp\" (UID: \"057571f2-061b-4efa-8faf-3949fbae1889\") " pod="openstack/barbican-db-create-v6khp" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.828846 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b21bc434-c659-4153-ad06-7eb075fae022-operator-scripts\") pod \"cinder-075e-account-create-lp2p6\" (UID: \"b21bc434-c659-4153-ad06-7eb075fae022\") " pod="openstack/cinder-075e-account-create-lp2p6" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.860378 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gvgrx\" (UniqueName: \"kubernetes.io/projected/b21bc434-c659-4153-ad06-7eb075fae022-kube-api-access-gvgrx\") pod \"cinder-075e-account-create-lp2p6\" (UID: \"b21bc434-c659-4153-ad06-7eb075fae022\") " pod="openstack/cinder-075e-account-create-lp2p6" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.886893 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-trbrn"] Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.888150 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.896808 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj78p\" (UniqueName: \"kubernetes.io/projected/057571f2-061b-4efa-8faf-3949fbae1889-kube-api-access-nj78p\") pod \"barbican-db-create-v6khp\" (UID: \"057571f2-061b-4efa-8faf-3949fbae1889\") " pod="openstack/barbican-db-create-v6khp" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.897153 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v6khp" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.905720 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-7mzbq"] Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.906731 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7mzbq" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.907241 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4547k" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.907480 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.907593 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.907726 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.923024 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-075e-account-create-lp2p6" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.974888 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-13ae-account-create-49z99"] Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.975999 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-13ae-account-create-49z99" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.980988 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Nov 25 15:13:15 crc kubenswrapper[4731]: I1125 15:13:15.986110 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-trbrn"] Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.016726 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-7mzbq"] Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.029650 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-13ae-account-create-49z99"] Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.030605 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzn2c\" (UniqueName: \"kubernetes.io/projected/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-kube-api-access-qzn2c\") pod \"heat-db-create-7mzbq\" (UID: \"f19edf47-a4ad-4705-b1e7-b4316db0ac5f\") " pod="openstack/heat-db-create-7mzbq" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.030643 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-combined-ca-bundle\") pod \"keystone-db-sync-trbrn\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.030689 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh87g\" (UniqueName: \"kubernetes.io/projected/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-kube-api-access-mh87g\") pod \"keystone-db-sync-trbrn\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.030769 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-config-data\") pod \"keystone-db-sync-trbrn\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.030800 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-operator-scripts\") pod \"heat-db-create-7mzbq\" (UID: \"f19edf47-a4ad-4705-b1e7-b4316db0ac5f\") " pod="openstack/heat-db-create-7mzbq" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.072880 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-s2pj9"] Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.074026 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s2pj9" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.091260 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s2pj9"] Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.118950 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-4a1e-account-create-znsxk"] Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.120103 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-4a1e-account-create-znsxk" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.122927 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.133667 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-config-data\") pod \"keystone-db-sync-trbrn\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.133730 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-operator-scripts\") pod \"heat-db-create-7mzbq\" (UID: \"f19edf47-a4ad-4705-b1e7-b4316db0ac5f\") " pod="openstack/heat-db-create-7mzbq" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.133770 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzn2c\" (UniqueName: \"kubernetes.io/projected/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-kube-api-access-qzn2c\") pod \"heat-db-create-7mzbq\" (UID: \"f19edf47-a4ad-4705-b1e7-b4316db0ac5f\") " pod="openstack/heat-db-create-7mzbq" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.133788 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-combined-ca-bundle\") pod \"keystone-db-sync-trbrn\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.133835 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh87g\" (UniqueName: \"kubernetes.io/projected/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-kube-api-access-mh87g\") pod \"keystone-db-sync-trbrn\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.133881 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f30f9bba-77f8-4ec4-8a2c-0136424c403a-operator-scripts\") pod \"heat-13ae-account-create-49z99\" (UID: \"f30f9bba-77f8-4ec4-8a2c-0136424c403a\") " pod="openstack/heat-13ae-account-create-49z99" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.133918 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npmmc\" (UniqueName: \"kubernetes.io/projected/f30f9bba-77f8-4ec4-8a2c-0136424c403a-kube-api-access-npmmc\") pod \"heat-13ae-account-create-49z99\" (UID: \"f30f9bba-77f8-4ec4-8a2c-0136424c403a\") " pod="openstack/heat-13ae-account-create-49z99" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.137581 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-operator-scripts\") pod \"heat-db-create-7mzbq\" (UID: \"f19edf47-a4ad-4705-b1e7-b4316db0ac5f\") " pod="openstack/heat-db-create-7mzbq" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.152713 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4a1e-account-create-znsxk"] Nov 25 15:13:16 crc 
kubenswrapper[4731]: I1125 15:13:16.153622 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-combined-ca-bundle\") pod \"keystone-db-sync-trbrn\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.167578 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzn2c\" (UniqueName: \"kubernetes.io/projected/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-kube-api-access-qzn2c\") pod \"heat-db-create-7mzbq\" (UID: \"f19edf47-a4ad-4705-b1e7-b4316db0ac5f\") " pod="openstack/heat-db-create-7mzbq" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.168095 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-config-data\") pod \"keystone-db-sync-trbrn\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.173878 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh87g\" (UniqueName: \"kubernetes.io/projected/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-kube-api-access-mh87g\") pod \"keystone-db-sync-trbrn\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.235640 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f30f9bba-77f8-4ec4-8a2c-0136424c403a-operator-scripts\") pod \"heat-13ae-account-create-49z99\" (UID: \"f30f9bba-77f8-4ec4-8a2c-0136424c403a\") " pod="openstack/heat-13ae-account-create-49z99" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.235692 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s995k\" (UniqueName: \"kubernetes.io/projected/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-kube-api-access-s995k\") pod \"barbican-4a1e-account-create-znsxk\" (UID: \"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75\") " pod="openstack/barbican-4a1e-account-create-znsxk" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.235715 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npmmc\" (UniqueName: \"kubernetes.io/projected/f30f9bba-77f8-4ec4-8a2c-0136424c403a-kube-api-access-npmmc\") pod \"heat-13ae-account-create-49z99\" (UID: \"f30f9bba-77f8-4ec4-8a2c-0136424c403a\") " pod="openstack/heat-13ae-account-create-49z99" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.235736 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdeef619-da62-4c86-949e-1b795bdc5de3-operator-scripts\") pod \"neutron-db-create-s2pj9\" (UID: \"fdeef619-da62-4c86-949e-1b795bdc5de3\") " pod="openstack/neutron-db-create-s2pj9" Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.235754 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-operator-scripts\") pod \"barbican-4a1e-account-create-znsxk\" (UID: \"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75\") " pod="openstack/barbican-4a1e-account-create-znsxk" Nov 25 
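Editor's note: every volume in the burst above passes through the same three logged phases: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), and MountVolume.SetUp succeeded (operation_generator.go:637). The reconciler drives each volume independently, which is why entries for different pods interleave. The sketch below compresses that sequence into one helper; it is illustrative only, with pod and volume names taken from the surrounding entries, while the real reconciler also handles device mounts, remounts, and error backoff.

```go
package main

import "fmt"

// mountVolume prints the three-phase sequence each volume traverses in the
// log: verify controller attach, start the mount, then set up the mount.
func mountVolume(pod, volume string) {
	fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q pod=%q\n", volume, pod)
	fmt.Printf("operationExecutor.MountVolume started for volume %q pod=%q\n", volume, pod)
	fmt.Printf("MountVolume.SetUp succeeded for volume %q pod=%q\n", volume, pod)
}

func main() {
	// Pod/volume pairs taken from the neutron and barbican entries above.
	mountVolume("openstack/neutron-db-create-s2pj9", "operator-scripts")
	mountVolume("openstack/neutron-db-create-s2pj9", "kube-api-access-vwlkb")
	mountVolume("openstack/barbican-4a1e-account-create-znsxk", "operator-scripts")
	mountVolume("openstack/barbican-4a1e-account-create-znsxk", "kube-api-access-s995k")
}
```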
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.235794 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwlkb\" (UniqueName: \"kubernetes.io/projected/fdeef619-da62-4c86-949e-1b795bdc5de3-kube-api-access-vwlkb\") pod \"neutron-db-create-s2pj9\" (UID: \"fdeef619-da62-4c86-949e-1b795bdc5de3\") " pod="openstack/neutron-db-create-s2pj9"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.237219 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f30f9bba-77f8-4ec4-8a2c-0136424c403a-operator-scripts\") pod \"heat-13ae-account-create-49z99\" (UID: \"f30f9bba-77f8-4ec4-8a2c-0136424c403a\") " pod="openstack/heat-13ae-account-create-49z99"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.242642 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-trbrn"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.253242 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7mzbq"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.254885 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npmmc\" (UniqueName: \"kubernetes.io/projected/f30f9bba-77f8-4ec4-8a2c-0136424c403a-kube-api-access-npmmc\") pod \"heat-13ae-account-create-49z99\" (UID: \"f30f9bba-77f8-4ec4-8a2c-0136424c403a\") " pod="openstack/heat-13ae-account-create-49z99"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.297767 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-13ae-account-create-49z99"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.341596 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwlkb\" (UniqueName: \"kubernetes.io/projected/fdeef619-da62-4c86-949e-1b795bdc5de3-kube-api-access-vwlkb\") pod \"neutron-db-create-s2pj9\" (UID: \"fdeef619-da62-4c86-949e-1b795bdc5de3\") " pod="openstack/neutron-db-create-s2pj9"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.341968 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s995k\" (UniqueName: \"kubernetes.io/projected/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-kube-api-access-s995k\") pod \"barbican-4a1e-account-create-znsxk\" (UID: \"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75\") " pod="openstack/barbican-4a1e-account-create-znsxk"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.342075 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdeef619-da62-4c86-949e-1b795bdc5de3-operator-scripts\") pod \"neutron-db-create-s2pj9\" (UID: \"fdeef619-da62-4c86-949e-1b795bdc5de3\") " pod="openstack/neutron-db-create-s2pj9"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.342165 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-operator-scripts\") pod \"barbican-4a1e-account-create-znsxk\" (UID: \"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75\") " pod="openstack/barbican-4a1e-account-create-znsxk"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.342521 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-4917-account-create-kgths"]
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.343107 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdeef619-da62-4c86-949e-1b795bdc5de3-operator-scripts\") pod \"neutron-db-create-s2pj9\" (UID: \"fdeef619-da62-4c86-949e-1b795bdc5de3\") " pod="openstack/neutron-db-create-s2pj9"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.343471 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-operator-scripts\") pod \"barbican-4a1e-account-create-znsxk\" (UID: \"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75\") " pod="openstack/barbican-4a1e-account-create-znsxk"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.343717 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4917-account-create-kgths"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.345081 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.363967 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s995k\" (UniqueName: \"kubernetes.io/projected/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-kube-api-access-s995k\") pod \"barbican-4a1e-account-create-znsxk\" (UID: \"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75\") " pod="openstack/barbican-4a1e-account-create-znsxk"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.364790 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwlkb\" (UniqueName: \"kubernetes.io/projected/fdeef619-da62-4c86-949e-1b795bdc5de3-kube-api-access-vwlkb\") pod \"neutron-db-create-s2pj9\" (UID: \"fdeef619-da62-4c86-949e-1b795bdc5de3\") " pod="openstack/neutron-db-create-s2pj9"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.372710 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4917-account-create-kgths"]
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.427265 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s2pj9"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.443755 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfk6c\" (UniqueName: \"kubernetes.io/projected/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-kube-api-access-dfk6c\") pod \"neutron-4917-account-create-kgths\" (UID: \"a791aaf4-bccc-458a-9072-a5f3e44a6ea6\") " pod="openstack/neutron-4917-account-create-kgths"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.443811 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-operator-scripts\") pod \"neutron-4917-account-create-kgths\" (UID: \"a791aaf4-bccc-458a-9072-a5f3e44a6ea6\") " pod="openstack/neutron-4917-account-create-kgths"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.536242 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4a1e-account-create-znsxk"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.545737 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfk6c\" (UniqueName: \"kubernetes.io/projected/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-kube-api-access-dfk6c\") pod \"neutron-4917-account-create-kgths\" (UID: \"a791aaf4-bccc-458a-9072-a5f3e44a6ea6\") " pod="openstack/neutron-4917-account-create-kgths"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.545796 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-operator-scripts\") pod \"neutron-4917-account-create-kgths\" (UID: \"a791aaf4-bccc-458a-9072-a5f3e44a6ea6\") " pod="openstack/neutron-4917-account-create-kgths"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.546489 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-operator-scripts\") pod \"neutron-4917-account-create-kgths\" (UID: \"a791aaf4-bccc-458a-9072-a5f3e44a6ea6\") " pod="openstack/neutron-4917-account-create-kgths"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.566727 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfk6c\" (UniqueName: \"kubernetes.io/projected/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-kube-api-access-dfk6c\") pod \"neutron-4917-account-create-kgths\" (UID: \"a791aaf4-bccc-458a-9072-a5f3e44a6ea6\") " pod="openstack/neutron-4917-account-create-kgths"
Nov 25 15:13:16 crc kubenswrapper[4731]: I1125 15:13:16.660281 4731 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/neutron-4917-account-create-kgths" Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.174404 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-v6khp"] Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.446738 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-s2pj9"] Nov 25 15:13:17 crc kubenswrapper[4731]: W1125 15:13:17.453482 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfdeef619_da62_4c86_949e_1b795bdc5de3.slice/crio-5015cc71ac16355bb4b28efdef948349b402c9c3e8f1c57b5476285d26078b8e WatchSource:0}: Error finding container 5015cc71ac16355bb4b28efdef948349b402c9c3e8f1c57b5476285d26078b8e: Status 404 returned error can't find the container with id 5015cc71ac16355bb4b28efdef948349b402c9c3e8f1c57b5476285d26078b8e Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.552331 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-13ae-account-create-49z99"] Nov 25 15:13:17 crc kubenswrapper[4731]: W1125 15:13:17.564682 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf30f9bba_77f8_4ec4_8a2c_0136424c403a.slice/crio-4bd5daa1dc747466743ebf96935014d838048e1c87e630ad8dc0cc5b4d1befbd WatchSource:0}: Error finding container 4bd5daa1dc747466743ebf96935014d838048e1c87e630ad8dc0cc5b4d1befbd: Status 404 returned error can't find the container with id 4bd5daa1dc747466743ebf96935014d838048e1c87e630ad8dc0cc5b4d1befbd Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.574557 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-7mzbq"] Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.593170 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-trbrn"] Nov 25 15:13:17 crc kubenswrapper[4731]: W1125 15:13:17.595103 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc59b5888_0ce3_4fa0_8e74_6ea06ff91390.slice/crio-d9592fd43ad11b9e221cca3fb3eb09c37f21f8a2b2b4ec32499cac5eef340aae WatchSource:0}: Error finding container d9592fd43ad11b9e221cca3fb3eb09c37f21f8a2b2b4ec32499cac5eef340aae: Status 404 returned error can't find the container with id d9592fd43ad11b9e221cca3fb3eb09c37f21f8a2b2b4ec32499cac5eef340aae Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.603844 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-075e-account-create-lp2p6"] Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.680514 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-4a1e-account-create-znsxk"] Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.705067 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-c8qvw"] Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.721625 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-4917-account-create-kgths"] Nov 25 15:13:17 crc kubenswrapper[4731]: W1125 15:13:17.743222 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda791aaf4_bccc_458a_9072_a5f3e44a6ea6.slice/crio-ef5c59b7c25bd06087f6be934c16bcca6687c6df6b83ce72178b9bc5d742ebd4 WatchSource:0}: Error finding container 
ef5c59b7c25bd06087f6be934c16bcca6687c6df6b83ce72178b9bc5d742ebd4: Status 404 returned error can't find the container with id ef5c59b7c25bd06087f6be934c16bcca6687c6df6b83ce72178b9bc5d742ebd4 Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.875667 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-trbrn" event={"ID":"c59b5888-0ce3-4fa0-8e74-6ea06ff91390","Type":"ContainerStarted","Data":"d9592fd43ad11b9e221cca3fb3eb09c37f21f8a2b2b4ec32499cac5eef340aae"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.914766 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-075e-account-create-lp2p6" event={"ID":"b21bc434-c659-4153-ad06-7eb075fae022","Type":"ContainerStarted","Data":"6257169603ee55d319efa59b186644290501a0abc6b16fd656d320e87310aa8f"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.916368 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c8qvw" event={"ID":"26e04865-a9b1-495f-b0c3-0042b35c33c2","Type":"ContainerStarted","Data":"5e6989948f1c296190a3358a6901f4fe0047a171f26efe11159719f792150b18"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.950526 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"7344185b2604cbeb587ae1ce35fb50c8ec71d8e0d59b27909c6d3be6bbd6cfe9"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.950580 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"c1e0735e7c6e49ba957166d7dc789f6ff71857f3f9a290d73f7a187fe5c44c14"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.950592 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"4bb5e28734c31adf2944b4746cd0098fa782ac20f2022fb12eaeb5b27ef90681"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.954306 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4917-account-create-kgths" event={"ID":"a791aaf4-bccc-458a-9072-a5f3e44a6ea6","Type":"ContainerStarted","Data":"ef5c59b7c25bd06087f6be934c16bcca6687c6df6b83ce72178b9bc5d742ebd4"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.956284 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4a1e-account-create-znsxk" event={"ID":"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75","Type":"ContainerStarted","Data":"9eec612a47689235283801be354feea94fcd5ae48db797f6c9c08734d5e65927"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.958022 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-13ae-account-create-49z99" event={"ID":"f30f9bba-77f8-4ec4-8a2c-0136424c403a","Type":"ContainerStarted","Data":"4bd5daa1dc747466743ebf96935014d838048e1c87e630ad8dc0cc5b4d1befbd"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.960065 4731 generic.go:334] "Generic (PLEG): container finished" podID="057571f2-061b-4efa-8faf-3949fbae1889" containerID="7e4ec7297ba9d0806edbfc9a236510f8dec15b38d5ca5bd26e79192d0fe10a93" exitCode=0 Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.960126 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v6khp" event={"ID":"057571f2-061b-4efa-8faf-3949fbae1889","Type":"ContainerDied","Data":"7e4ec7297ba9d0806edbfc9a236510f8dec15b38d5ca5bd26e79192d0fe10a93"} 
Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.960155 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v6khp" event={"ID":"057571f2-061b-4efa-8faf-3949fbae1889","Type":"ContainerStarted","Data":"33926bcb28a50c6865455f9e8fd81924999d89fa111028da1d1af250ee4ae9fa"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.962969 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7mzbq" event={"ID":"f19edf47-a4ad-4705-b1e7-b4316db0ac5f","Type":"ContainerStarted","Data":"50a8a3098d5aa8c0e9bce627c9c02b600767f1f826bdbb16bc9afad77cdc4d06"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.964784 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s2pj9" event={"ID":"fdeef619-da62-4c86-949e-1b795bdc5de3","Type":"ContainerStarted","Data":"6444728cfd18234fbcce00ceadcac4fff9a97f1e22b72cde4ebcdb152a3578c7"} Nov 25 15:13:17 crc kubenswrapper[4731]: I1125 15:13:17.964812 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s2pj9" event={"ID":"fdeef619-da62-4c86-949e-1b795bdc5de3","Type":"ContainerStarted","Data":"5015cc71ac16355bb4b28efdef948349b402c9c3e8f1c57b5476285d26078b8e"} Nov 25 15:13:18 crc kubenswrapper[4731]: I1125 15:13:18.007373 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-s2pj9" podStartSLOduration=2.0073497 podStartE2EDuration="2.0073497s" podCreationTimestamp="2025-11-25 15:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:17.989246377 +0000 UTC m=+1156.736380807" watchObservedRunningTime="2025-11-25 15:13:18.0073497 +0000 UTC m=+1156.754484130" Nov 25 15:13:18 crc kubenswrapper[4731]: I1125 15:13:18.976216 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c8qvw" event={"ID":"26e04865-a9b1-495f-b0c3-0042b35c33c2","Type":"ContainerStarted","Data":"0359022f3196b281024530f3ed45d5e1e80df84f485f1ee6a9d222a6c2eb1b1a"} Nov 25 15:13:18 crc kubenswrapper[4731]: I1125 15:13:18.980437 4731 generic.go:334] "Generic (PLEG): container finished" podID="f19edf47-a4ad-4705-b1e7-b4316db0ac5f" containerID="7138424c8158608f0ad55d8af270a01abfb9157a812ea9a061509ea8994f9d32" exitCode=0 Nov 25 15:13:18 crc kubenswrapper[4731]: I1125 15:13:18.980492 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7mzbq" event={"ID":"f19edf47-a4ad-4705-b1e7-b4316db0ac5f","Type":"ContainerDied","Data":"7138424c8158608f0ad55d8af270a01abfb9157a812ea9a061509ea8994f9d32"} Nov 25 15:13:18 crc kubenswrapper[4731]: I1125 15:13:18.988319 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"d5a3d918849f73a79a5bcf54041bd40d6031f50fd51392f8a7b93a9206f301e5"} Nov 25 15:13:18 crc kubenswrapper[4731]: I1125 15:13:18.988370 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"10f63bab275e4d377b15ae78e3c430e411a35afabf9a7a50e8f54171909c296e"} Nov 25 15:13:18 crc kubenswrapper[4731]: I1125 15:13:18.991388 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4917-account-create-kgths" 
event={"ID":"a791aaf4-bccc-458a-9072-a5f3e44a6ea6","Type":"ContainerStarted","Data":"8492cc2ca277d0639fd265588a1f21d5a50a15adca8def590798fc3ca8cde89b"} Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.005694 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-c8qvw" podStartSLOduration=4.005674235 podStartE2EDuration="4.005674235s" podCreationTimestamp="2025-11-25 15:13:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:19.001128481 +0000 UTC m=+1157.748262941" watchObservedRunningTime="2025-11-25 15:13:19.005674235 +0000 UTC m=+1157.752808665" Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.009543 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4a1e-account-create-znsxk" event={"ID":"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75","Type":"ContainerStarted","Data":"667b6e739a40b5475d4d29d25760a17a0c925f01422df31b55f21dff8d11f1bf"} Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.013558 4731 generic.go:334] "Generic (PLEG): container finished" podID="f30f9bba-77f8-4ec4-8a2c-0136424c403a" containerID="be7483707aa511ff55b4bf374a66218106b1ad995cc1ddad1f14f40b4bf63632" exitCode=0 Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.013626 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-13ae-account-create-49z99" event={"ID":"f30f9bba-77f8-4ec4-8a2c-0136424c403a","Type":"ContainerDied","Data":"be7483707aa511ff55b4bf374a66218106b1ad995cc1ddad1f14f40b4bf63632"} Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.018091 4731 generic.go:334] "Generic (PLEG): container finished" podID="fdeef619-da62-4c86-949e-1b795bdc5de3" containerID="6444728cfd18234fbcce00ceadcac4fff9a97f1e22b72cde4ebcdb152a3578c7" exitCode=0 Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.018147 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s2pj9" event={"ID":"fdeef619-da62-4c86-949e-1b795bdc5de3","Type":"ContainerDied","Data":"6444728cfd18234fbcce00ceadcac4fff9a97f1e22b72cde4ebcdb152a3578c7"} Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.020806 4731 generic.go:334] "Generic (PLEG): container finished" podID="b21bc434-c659-4153-ad06-7eb075fae022" containerID="21dff3116e065585b7d94d9a6e85269ff40452a7e8651234cc72dcffe2f88453" exitCode=0 Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.021089 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-075e-account-create-lp2p6" event={"ID":"b21bc434-c659-4153-ad06-7eb075fae022","Type":"ContainerDied","Data":"21dff3116e065585b7d94d9a6e85269ff40452a7e8651234cc72dcffe2f88453"} Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.027467 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-4917-account-create-kgths" podStartSLOduration=3.02744624 podStartE2EDuration="3.02744624s" podCreationTimestamp="2025-11-25 15:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:19.025765718 +0000 UTC m=+1157.772900138" watchObservedRunningTime="2025-11-25 15:13:19.02744624 +0000 UTC m=+1157.774580670" Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.072336 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-4a1e-account-create-znsxk" podStartSLOduration=3.072310905 
podStartE2EDuration="3.072310905s" podCreationTimestamp="2025-11-25 15:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:19.059273098 +0000 UTC m=+1157.806407528" watchObservedRunningTime="2025-11-25 15:13:19.072310905 +0000 UTC m=+1157.819445345" Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.436209 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v6khp" Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.538466 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/057571f2-061b-4efa-8faf-3949fbae1889-operator-scripts\") pod \"057571f2-061b-4efa-8faf-3949fbae1889\" (UID: \"057571f2-061b-4efa-8faf-3949fbae1889\") " Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.538545 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj78p\" (UniqueName: \"kubernetes.io/projected/057571f2-061b-4efa-8faf-3949fbae1889-kube-api-access-nj78p\") pod \"057571f2-061b-4efa-8faf-3949fbae1889\" (UID: \"057571f2-061b-4efa-8faf-3949fbae1889\") " Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.539703 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/057571f2-061b-4efa-8faf-3949fbae1889-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "057571f2-061b-4efa-8faf-3949fbae1889" (UID: "057571f2-061b-4efa-8faf-3949fbae1889"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.569841 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/057571f2-061b-4efa-8faf-3949fbae1889-kube-api-access-nj78p" (OuterVolumeSpecName: "kube-api-access-nj78p") pod "057571f2-061b-4efa-8faf-3949fbae1889" (UID: "057571f2-061b-4efa-8faf-3949fbae1889"). InnerVolumeSpecName "kube-api-access-nj78p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.640792 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/057571f2-061b-4efa-8faf-3949fbae1889-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:19 crc kubenswrapper[4731]: I1125 15:13:19.641086 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj78p\" (UniqueName: \"kubernetes.io/projected/057571f2-061b-4efa-8faf-3949fbae1889-kube-api-access-nj78p\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.031253 4731 generic.go:334] "Generic (PLEG): container finished" podID="26e04865-a9b1-495f-b0c3-0042b35c33c2" containerID="0359022f3196b281024530f3ed45d5e1e80df84f485f1ee6a9d222a6c2eb1b1a" exitCode=0 Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.031340 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c8qvw" event={"ID":"26e04865-a9b1-495f-b0c3-0042b35c33c2","Type":"ContainerDied","Data":"0359022f3196b281024530f3ed45d5e1e80df84f485f1ee6a9d222a6c2eb1b1a"} Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.040189 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.040231 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.042791 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"0e03a3eac7735260235a04dab63fd32148a19db386052f1127b9bf611d13736d"} Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.042849 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"7e07b944-1135-4cda-ae5f-6c387117d5bd","Type":"ContainerStarted","Data":"29463db6d94ede8db524f5c4fa84f4c2c9786ef0fd028c2e511fee166915a007"} Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.045068 4731 generic.go:334] "Generic (PLEG): container finished" podID="a791aaf4-bccc-458a-9072-a5f3e44a6ea6" containerID="8492cc2ca277d0639fd265588a1f21d5a50a15adca8def590798fc3ca8cde89b" exitCode=0 Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.045192 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4917-account-create-kgths" event={"ID":"a791aaf4-bccc-458a-9072-a5f3e44a6ea6","Type":"ContainerDied","Data":"8492cc2ca277d0639fd265588a1f21d5a50a15adca8def590798fc3ca8cde89b"} Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.052732 4731 generic.go:334] "Generic (PLEG): container finished" podID="8f41eaec-9d8e-4a4a-bed0-2a225ace9e75" containerID="667b6e739a40b5475d4d29d25760a17a0c925f01422df31b55f21dff8d11f1bf" exitCode=0 Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.052802 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4a1e-account-create-znsxk" 
event={"ID":"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75","Type":"ContainerDied","Data":"667b6e739a40b5475d4d29d25760a17a0c925f01422df31b55f21dff8d11f1bf"} Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.056395 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-v6khp" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.057440 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-v6khp" event={"ID":"057571f2-061b-4efa-8faf-3949fbae1889","Type":"ContainerDied","Data":"33926bcb28a50c6865455f9e8fd81924999d89fa111028da1d1af250ee4ae9fa"} Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.057474 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33926bcb28a50c6865455f9e8fd81924999d89fa111028da1d1af250ee4ae9fa" Nov 25 15:13:20 crc kubenswrapper[4731]: E1125 15:13:20.111972 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod057571f2_061b_4efa_8faf_3949fbae1889.slice/crio-33926bcb28a50c6865455f9e8fd81924999d89fa111028da1d1af250ee4ae9fa\": RecentStats: unable to find data in memory cache]" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.139755 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.700144255 podStartE2EDuration="50.13972844s" podCreationTimestamp="2025-11-25 15:12:30 +0000 UTC" firstStartedPulling="2025-11-25 15:13:05.344282997 +0000 UTC m=+1144.091417457" lastFinishedPulling="2025-11-25 15:13:16.783867212 +0000 UTC m=+1155.531001642" observedRunningTime="2025-11-25 15:13:20.104879837 +0000 UTC m=+1158.852014287" watchObservedRunningTime="2025-11-25 15:13:20.13972844 +0000 UTC m=+1158.886862870" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.394686 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zxg8k"] Nov 25 15:13:20 crc kubenswrapper[4731]: E1125 15:13:20.395418 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="057571f2-061b-4efa-8faf-3949fbae1889" containerName="mariadb-database-create" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.395438 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="057571f2-061b-4efa-8faf-3949fbae1889" containerName="mariadb-database-create" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.395666 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="057571f2-061b-4efa-8faf-3949fbae1889" containerName="mariadb-database-create" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.396721 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.399393 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.463409 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjbmg\" (UniqueName: \"kubernetes.io/projected/d1e157c6-3f51-4df4-b6d8-b6a362266668-kube-api-access-fjbmg\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.463479 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.463505 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-config\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.463577 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.463611 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.463630 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.478858 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zxg8k"] Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.565182 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjbmg\" (UniqueName: \"kubernetes.io/projected/d1e157c6-3f51-4df4-b6d8-b6a362266668-kube-api-access-fjbmg\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.565246 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: 
\"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.565277 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-config\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.565365 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.565700 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.565728 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.566581 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.566613 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.566628 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.567252 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.572353 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-config\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: 
I1125 15:13:20.605929 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjbmg\" (UniqueName: \"kubernetes.io/projected/d1e157c6-3f51-4df4-b6d8-b6a362266668-kube-api-access-fjbmg\") pod \"dnsmasq-dns-5c79d794d7-zxg8k\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.686161 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.696612 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-13ae-account-create-49z99" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.698139 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-075e-account-create-lp2p6" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.721431 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7mzbq" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.734279 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s2pj9" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.768988 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdeef619-da62-4c86-949e-1b795bdc5de3-operator-scripts\") pod \"fdeef619-da62-4c86-949e-1b795bdc5de3\" (UID: \"fdeef619-da62-4c86-949e-1b795bdc5de3\") " Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.769062 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwlkb\" (UniqueName: \"kubernetes.io/projected/fdeef619-da62-4c86-949e-1b795bdc5de3-kube-api-access-vwlkb\") pod \"fdeef619-da62-4c86-949e-1b795bdc5de3\" (UID: \"fdeef619-da62-4c86-949e-1b795bdc5de3\") " Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.769164 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f30f9bba-77f8-4ec4-8a2c-0136424c403a-operator-scripts\") pod \"f30f9bba-77f8-4ec4-8a2c-0136424c403a\" (UID: \"f30f9bba-77f8-4ec4-8a2c-0136424c403a\") " Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.769194 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-operator-scripts\") pod \"f19edf47-a4ad-4705-b1e7-b4316db0ac5f\" (UID: \"f19edf47-a4ad-4705-b1e7-b4316db0ac5f\") " Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.769219 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvgrx\" (UniqueName: \"kubernetes.io/projected/b21bc434-c659-4153-ad06-7eb075fae022-kube-api-access-gvgrx\") pod \"b21bc434-c659-4153-ad06-7eb075fae022\" (UID: \"b21bc434-c659-4153-ad06-7eb075fae022\") " Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.769248 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzn2c\" (UniqueName: \"kubernetes.io/projected/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-kube-api-access-qzn2c\") pod \"f19edf47-a4ad-4705-b1e7-b4316db0ac5f\" (UID: \"f19edf47-a4ad-4705-b1e7-b4316db0ac5f\") " Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.769303 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b21bc434-c659-4153-ad06-7eb075fae022-operator-scripts\") pod \"b21bc434-c659-4153-ad06-7eb075fae022\" (UID: \"b21bc434-c659-4153-ad06-7eb075fae022\") " Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.769369 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npmmc\" (UniqueName: \"kubernetes.io/projected/f30f9bba-77f8-4ec4-8a2c-0136424c403a-kube-api-access-npmmc\") pod \"f30f9bba-77f8-4ec4-8a2c-0136424c403a\" (UID: \"f30f9bba-77f8-4ec4-8a2c-0136424c403a\") " Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.770004 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f30f9bba-77f8-4ec4-8a2c-0136424c403a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f30f9bba-77f8-4ec4-8a2c-0136424c403a" (UID: "f30f9bba-77f8-4ec4-8a2c-0136424c403a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.770076 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdeef619-da62-4c86-949e-1b795bdc5de3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdeef619-da62-4c86-949e-1b795bdc5de3" (UID: "fdeef619-da62-4c86-949e-1b795bdc5de3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.770217 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b21bc434-c659-4153-ad06-7eb075fae022-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b21bc434-c659-4153-ad06-7eb075fae022" (UID: "b21bc434-c659-4153-ad06-7eb075fae022"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.770251 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f19edf47-a4ad-4705-b1e7-b4316db0ac5f" (UID: "f19edf47-a4ad-4705-b1e7-b4316db0ac5f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.770656 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f30f9bba-77f8-4ec4-8a2c-0136424c403a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.770675 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.770684 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b21bc434-c659-4153-ad06-7eb075fae022-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.770693 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdeef619-da62-4c86-949e-1b795bdc5de3-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.777782 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f30f9bba-77f8-4ec4-8a2c-0136424c403a-kube-api-access-npmmc" (OuterVolumeSpecName: "kube-api-access-npmmc") pod "f30f9bba-77f8-4ec4-8a2c-0136424c403a" (UID: "f30f9bba-77f8-4ec4-8a2c-0136424c403a"). InnerVolumeSpecName "kube-api-access-npmmc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.779249 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-kube-api-access-qzn2c" (OuterVolumeSpecName: "kube-api-access-qzn2c") pod "f19edf47-a4ad-4705-b1e7-b4316db0ac5f" (UID: "f19edf47-a4ad-4705-b1e7-b4316db0ac5f"). InnerVolumeSpecName "kube-api-access-qzn2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.779362 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b21bc434-c659-4153-ad06-7eb075fae022-kube-api-access-gvgrx" (OuterVolumeSpecName: "kube-api-access-gvgrx") pod "b21bc434-c659-4153-ad06-7eb075fae022" (UID: "b21bc434-c659-4153-ad06-7eb075fae022"). InnerVolumeSpecName "kube-api-access-gvgrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.781223 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdeef619-da62-4c86-949e-1b795bdc5de3-kube-api-access-vwlkb" (OuterVolumeSpecName: "kube-api-access-vwlkb") pod "fdeef619-da62-4c86-949e-1b795bdc5de3" (UID: "fdeef619-da62-4c86-949e-1b795bdc5de3"). InnerVolumeSpecName "kube-api-access-vwlkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.871902 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwlkb\" (UniqueName: \"kubernetes.io/projected/fdeef619-da62-4c86-949e-1b795bdc5de3-kube-api-access-vwlkb\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.871928 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gvgrx\" (UniqueName: \"kubernetes.io/projected/b21bc434-c659-4153-ad06-7eb075fae022-kube-api-access-gvgrx\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.871938 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzn2c\" (UniqueName: \"kubernetes.io/projected/f19edf47-a4ad-4705-b1e7-b4316db0ac5f-kube-api-access-qzn2c\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:20 crc kubenswrapper[4731]: I1125 15:13:20.871946 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npmmc\" (UniqueName: \"kubernetes.io/projected/f30f9bba-77f8-4ec4-8a2c-0136424c403a-kube-api-access-npmmc\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.070674 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-s2pj9" Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.070688 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-s2pj9" event={"ID":"fdeef619-da62-4c86-949e-1b795bdc5de3","Type":"ContainerDied","Data":"5015cc71ac16355bb4b28efdef948349b402c9c3e8f1c57b5476285d26078b8e"} Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.070736 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5015cc71ac16355bb4b28efdef948349b402c9c3e8f1c57b5476285d26078b8e" Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.077285 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-075e-account-create-lp2p6" Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.077238 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-075e-account-create-lp2p6" event={"ID":"b21bc434-c659-4153-ad06-7eb075fae022","Type":"ContainerDied","Data":"6257169603ee55d319efa59b186644290501a0abc6b16fd656d320e87310aa8f"} Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.079865 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6257169603ee55d319efa59b186644290501a0abc6b16fd656d320e87310aa8f" Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.082053 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-7mzbq" event={"ID":"f19edf47-a4ad-4705-b1e7-b4316db0ac5f","Type":"ContainerDied","Data":"50a8a3098d5aa8c0e9bce627c9c02b600767f1f826bdbb16bc9afad77cdc4d06"} Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.082428 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a8a3098d5aa8c0e9bce627c9c02b600767f1f826bdbb16bc9afad77cdc4d06" Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.082069 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-7mzbq" Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.084205 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-13ae-account-create-49z99" Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.084380 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-13ae-account-create-49z99" event={"ID":"f30f9bba-77f8-4ec4-8a2c-0136424c403a","Type":"ContainerDied","Data":"4bd5daa1dc747466743ebf96935014d838048e1c87e630ad8dc0cc5b4d1befbd"} Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.086986 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bd5daa1dc747466743ebf96935014d838048e1c87e630ad8dc0cc5b4d1befbd" Nov 25 15:13:21 crc kubenswrapper[4731]: I1125 15:13:21.164938 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zxg8k"] Nov 25 15:13:22 crc kubenswrapper[4731]: I1125 15:13:22.098264 4731 generic.go:334] "Generic (PLEG): container finished" podID="8402be71-b88d-4091-a1ca-f7ceb7f8ba97" containerID="c71c9409472172c011a8303a9ee45b954237837a31dbc5a812378a1b1008eb74" exitCode=0 Nov 25 15:13:22 crc kubenswrapper[4731]: I1125 15:13:22.098362 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jmh97" event={"ID":"8402be71-b88d-4091-a1ca-f7ceb7f8ba97","Type":"ContainerDied","Data":"c71c9409472172c011a8303a9ee45b954237837a31dbc5a812378a1b1008eb74"} Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.144032 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-4917-account-create-kgths" event={"ID":"a791aaf4-bccc-458a-9072-a5f3e44a6ea6","Type":"ContainerDied","Data":"ef5c59b7c25bd06087f6be934c16bcca6687c6df6b83ce72178b9bc5d742ebd4"} Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.144530 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef5c59b7c25bd06087f6be934c16bcca6687c6df6b83ce72178b9bc5d742ebd4" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.276169 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-4917-account-create-kgths" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.357938 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4a1e-account-create-znsxk" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.389419 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-operator-scripts\") pod \"a791aaf4-bccc-458a-9072-a5f3e44a6ea6\" (UID: \"a791aaf4-bccc-458a-9072-a5f3e44a6ea6\") " Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.389888 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfk6c\" (UniqueName: \"kubernetes.io/projected/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-kube-api-access-dfk6c\") pod \"a791aaf4-bccc-458a-9072-a5f3e44a6ea6\" (UID: \"a791aaf4-bccc-458a-9072-a5f3e44a6ea6\") " Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.390328 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a791aaf4-bccc-458a-9072-a5f3e44a6ea6" (UID: "a791aaf4-bccc-458a-9072-a5f3e44a6ea6"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.390481 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.415420 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-kube-api-access-dfk6c" (OuterVolumeSpecName: "kube-api-access-dfk6c") pod "a791aaf4-bccc-458a-9072-a5f3e44a6ea6" (UID: "a791aaf4-bccc-458a-9072-a5f3e44a6ea6"). InnerVolumeSpecName "kube-api-access-dfk6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.426565 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jmh97" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.453698 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-c8qvw" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.491568 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s995k\" (UniqueName: \"kubernetes.io/projected/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-kube-api-access-s995k\") pod \"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75\" (UID: \"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75\") " Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.491715 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-operator-scripts\") pod \"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75\" (UID: \"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75\") " Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.492091 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfk6c\" (UniqueName: \"kubernetes.io/projected/a791aaf4-bccc-458a-9072-a5f3e44a6ea6-kube-api-access-dfk6c\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.492857 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8f41eaec-9d8e-4a4a-bed0-2a225ace9e75" (UID: "8f41eaec-9d8e-4a4a-bed0-2a225ace9e75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.499101 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-kube-api-access-s995k" (OuterVolumeSpecName: "kube-api-access-s995k") pod "8f41eaec-9d8e-4a4a-bed0-2a225ace9e75" (UID: "8f41eaec-9d8e-4a4a-bed0-2a225ace9e75"). InnerVolumeSpecName "kube-api-access-s995k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.592861 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-config-data\") pod \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.592956 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e04865-a9b1-495f-b0c3-0042b35c33c2-operator-scripts\") pod \"26e04865-a9b1-495f-b0c3-0042b35c33c2\" (UID: \"26e04865-a9b1-495f-b0c3-0042b35c33c2\") " Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.593008 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nctss\" (UniqueName: \"kubernetes.io/projected/26e04865-a9b1-495f-b0c3-0042b35c33c2-kube-api-access-nctss\") pod \"26e04865-a9b1-495f-b0c3-0042b35c33c2\" (UID: \"26e04865-a9b1-495f-b0c3-0042b35c33c2\") " Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.593060 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r46xx\" (UniqueName: \"kubernetes.io/projected/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-kube-api-access-r46xx\") pod \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.593084 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-db-sync-config-data\") pod \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.593462 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26e04865-a9b1-495f-b0c3-0042b35c33c2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "26e04865-a9b1-495f-b0c3-0042b35c33c2" (UID: "26e04865-a9b1-495f-b0c3-0042b35c33c2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.593867 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-combined-ca-bundle\") pod \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\" (UID: \"8402be71-b88d-4091-a1ca-f7ceb7f8ba97\") " Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.594210 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s995k\" (UniqueName: \"kubernetes.io/projected/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-kube-api-access-s995k\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.594229 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.594239 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/26e04865-a9b1-495f-b0c3-0042b35c33c2-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.599289 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-kube-api-access-r46xx" (OuterVolumeSpecName: "kube-api-access-r46xx") pod "8402be71-b88d-4091-a1ca-f7ceb7f8ba97" (UID: "8402be71-b88d-4091-a1ca-f7ceb7f8ba97"). InnerVolumeSpecName "kube-api-access-r46xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.599403 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e04865-a9b1-495f-b0c3-0042b35c33c2-kube-api-access-nctss" (OuterVolumeSpecName: "kube-api-access-nctss") pod "26e04865-a9b1-495f-b0c3-0042b35c33c2" (UID: "26e04865-a9b1-495f-b0c3-0042b35c33c2"). InnerVolumeSpecName "kube-api-access-nctss". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.599677 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8402be71-b88d-4091-a1ca-f7ceb7f8ba97" (UID: "8402be71-b88d-4091-a1ca-f7ceb7f8ba97"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.633325 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8402be71-b88d-4091-a1ca-f7ceb7f8ba97" (UID: "8402be71-b88d-4091-a1ca-f7ceb7f8ba97"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.658919 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-config-data" (OuterVolumeSpecName: "config-data") pod "8402be71-b88d-4091-a1ca-f7ceb7f8ba97" (UID: "8402be71-b88d-4091-a1ca-f7ceb7f8ba97"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.695898 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r46xx\" (UniqueName: \"kubernetes.io/projected/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-kube-api-access-r46xx\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.695937 4731 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.695948 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.695960 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8402be71-b88d-4091-a1ca-f7ceb7f8ba97-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:27 crc kubenswrapper[4731]: I1125 15:13:27.695970 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nctss\" (UniqueName: \"kubernetes.io/projected/26e04865-a9b1-495f-b0c3-0042b35c33c2-kube-api-access-nctss\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.156331 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-c8qvw" event={"ID":"26e04865-a9b1-495f-b0c3-0042b35c33c2","Type":"ContainerDied","Data":"5e6989948f1c296190a3358a6901f4fe0047a171f26efe11159719f792150b18"} Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.156386 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-c8qvw" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.156406 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e6989948f1c296190a3358a6901f4fe0047a171f26efe11159719f792150b18" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.158145 4731 generic.go:334] "Generic (PLEG): container finished" podID="d1e157c6-3f51-4df4-b6d8-b6a362266668" containerID="574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3" exitCode=0 Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.158203 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" event={"ID":"d1e157c6-3f51-4df4-b6d8-b6a362266668","Type":"ContainerDied","Data":"574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3"} Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.158220 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" event={"ID":"d1e157c6-3f51-4df4-b6d8-b6a362266668","Type":"ContainerStarted","Data":"a96e7fec57358edf18c97cdb00e1270de8f776a8851f0d471ea859dd54b528ab"} Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.161064 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jmh97" event={"ID":"8402be71-b88d-4091-a1ca-f7ceb7f8ba97","Type":"ContainerDied","Data":"65088962793e5be5157fdfb7dc01f052fe85d7ea87beed86983a846050423075"} Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.161087 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65088962793e5be5157fdfb7dc01f052fe85d7ea87beed86983a846050423075" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.161226 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jmh97" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.164723 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-4a1e-account-create-znsxk" event={"ID":"8f41eaec-9d8e-4a4a-bed0-2a225ace9e75","Type":"ContainerDied","Data":"9eec612a47689235283801be354feea94fcd5ae48db797f6c9c08734d5e65927"} Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.164749 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eec612a47689235283801be354feea94fcd5ae48db797f6c9c08734d5e65927" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.164806 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-4a1e-account-create-znsxk" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.175716 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-4917-account-create-kgths" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.176225 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-trbrn" event={"ID":"c59b5888-0ce3-4fa0-8e74-6ea06ff91390","Type":"ContainerStarted","Data":"7318225c192530846011681644b8be91cf945538e6bcbcad68cb94661455b779"} Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.216008 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-trbrn" podStartSLOduration=3.563968102 podStartE2EDuration="13.215990891s" podCreationTimestamp="2025-11-25 15:13:15 +0000 UTC" firstStartedPulling="2025-11-25 15:13:17.619625351 +0000 UTC m=+1156.366759781" lastFinishedPulling="2025-11-25 15:13:27.27164814 +0000 UTC m=+1166.018782570" observedRunningTime="2025-11-25 15:13:28.211323474 +0000 UTC m=+1166.958457904" watchObservedRunningTime="2025-11-25 15:13:28.215990891 +0000 UTC m=+1166.963125321" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.844055 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zxg8k"] Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.863624 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7jspm"] Nov 25 15:13:28 crc kubenswrapper[4731]: E1125 15:13:28.864070 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e04865-a9b1-495f-b0c3-0042b35c33c2" containerName="mariadb-database-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864088 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e04865-a9b1-495f-b0c3-0042b35c33c2" containerName="mariadb-database-create" Nov 25 15:13:28 crc kubenswrapper[4731]: E1125 15:13:28.864126 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8402be71-b88d-4091-a1ca-f7ceb7f8ba97" containerName="glance-db-sync" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864137 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8402be71-b88d-4091-a1ca-f7ceb7f8ba97" containerName="glance-db-sync" Nov 25 15:13:28 crc kubenswrapper[4731]: E1125 15:13:28.864155 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19edf47-a4ad-4705-b1e7-b4316db0ac5f" containerName="mariadb-database-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864162 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19edf47-a4ad-4705-b1e7-b4316db0ac5f" containerName="mariadb-database-create" Nov 25 15:13:28 crc kubenswrapper[4731]: E1125 15:13:28.864175 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a791aaf4-bccc-458a-9072-a5f3e44a6ea6" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864183 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a791aaf4-bccc-458a-9072-a5f3e44a6ea6" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: E1125 15:13:28.864196 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdeef619-da62-4c86-949e-1b795bdc5de3" containerName="mariadb-database-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864203 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdeef619-da62-4c86-949e-1b795bdc5de3" containerName="mariadb-database-create" Nov 25 15:13:28 crc kubenswrapper[4731]: E1125 15:13:28.864211 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b21bc434-c659-4153-ad06-7eb075fae022" 
containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864218 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b21bc434-c659-4153-ad06-7eb075fae022" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: E1125 15:13:28.864231 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f30f9bba-77f8-4ec4-8a2c-0136424c403a" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864237 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f30f9bba-77f8-4ec4-8a2c-0136424c403a" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: E1125 15:13:28.864261 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f41eaec-9d8e-4a4a-bed0-2a225ace9e75" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864269 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f41eaec-9d8e-4a4a-bed0-2a225ace9e75" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864484 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19edf47-a4ad-4705-b1e7-b4316db0ac5f" containerName="mariadb-database-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864501 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e04865-a9b1-495f-b0c3-0042b35c33c2" containerName="mariadb-database-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864515 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b21bc434-c659-4153-ad06-7eb075fae022" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864527 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdeef619-da62-4c86-949e-1b795bdc5de3" containerName="mariadb-database-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864541 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f30f9bba-77f8-4ec4-8a2c-0136424c403a" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864553 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a791aaf4-bccc-458a-9072-a5f3e44a6ea6" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864566 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8402be71-b88d-4091-a1ca-f7ceb7f8ba97" containerName="glance-db-sync" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.864576 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f41eaec-9d8e-4a4a-bed0-2a225ace9e75" containerName="mariadb-account-create" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.866995 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:28 crc kubenswrapper[4731]: I1125 15:13:28.877700 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7jspm"] Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.020321 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.020398 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4qp7\" (UniqueName: \"kubernetes.io/projected/6b14446a-c8de-419e-b8a5-a761ac712a97-kube-api-access-m4qp7\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.020484 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-config\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.020508 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.020546 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.020589 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.122123 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.122184 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4qp7\" (UniqueName: \"kubernetes.io/projected/6b14446a-c8de-419e-b8a5-a761ac712a97-kube-api-access-m4qp7\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.122240 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-config\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.122255 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.122280 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.122307 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.123129 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.123723 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.124616 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-config\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.125384 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.126142 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.143000 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4qp7\" (UniqueName: 
\"kubernetes.io/projected/6b14446a-c8de-419e-b8a5-a761ac712a97-kube-api-access-m4qp7\") pod \"dnsmasq-dns-5f59b8f679-7jspm\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.189399 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.193208 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" event={"ID":"d1e157c6-3f51-4df4-b6d8-b6a362266668","Type":"ContainerStarted","Data":"25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62"} Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.193273 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.230231 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" podStartSLOduration=9.230207214 podStartE2EDuration="9.230207214s" podCreationTimestamp="2025-11-25 15:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:29.21809188 +0000 UTC m=+1167.965226310" watchObservedRunningTime="2025-11-25 15:13:29.230207214 +0000 UTC m=+1167.977341664" Nov 25 15:13:29 crc kubenswrapper[4731]: I1125 15:13:29.689077 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7jspm"] Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.207201 4731 generic.go:334] "Generic (PLEG): container finished" podID="6b14446a-c8de-419e-b8a5-a761ac712a97" containerID="cd9c6156dd33fc3d0eed192f2013012937521d874c748f604944e8896f385cc1" exitCode=0 Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.207274 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" event={"ID":"6b14446a-c8de-419e-b8a5-a761ac712a97","Type":"ContainerDied","Data":"cd9c6156dd33fc3d0eed192f2013012937521d874c748f604944e8896f385cc1"} Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.207684 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" event={"ID":"6b14446a-c8de-419e-b8a5-a761ac712a97","Type":"ContainerStarted","Data":"bf28270ab4446b946c00fb885638a112ffe18ae6435f79e8362a0e44be97fc2a"} Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.207912 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" podUID="d1e157c6-3f51-4df4-b6d8-b6a362266668" containerName="dnsmasq-dns" containerID="cri-o://25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62" gracePeriod=10 Nov 25 15:13:30 crc kubenswrapper[4731]: E1125 15:13:30.399636 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1e157c6_3f51_4df4_b6d8_b6a362266668.slice/crio-conmon-25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.606289 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.755628 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjbmg\" (UniqueName: \"kubernetes.io/projected/d1e157c6-3f51-4df4-b6d8-b6a362266668-kube-api-access-fjbmg\") pod \"d1e157c6-3f51-4df4-b6d8-b6a362266668\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.755737 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-swift-storage-0\") pod \"d1e157c6-3f51-4df4-b6d8-b6a362266668\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.755791 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-nb\") pod \"d1e157c6-3f51-4df4-b6d8-b6a362266668\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.755810 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-svc\") pod \"d1e157c6-3f51-4df4-b6d8-b6a362266668\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.755859 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-sb\") pod \"d1e157c6-3f51-4df4-b6d8-b6a362266668\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.755932 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-config\") pod \"d1e157c6-3f51-4df4-b6d8-b6a362266668\" (UID: \"d1e157c6-3f51-4df4-b6d8-b6a362266668\") " Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.760256 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1e157c6-3f51-4df4-b6d8-b6a362266668-kube-api-access-fjbmg" (OuterVolumeSpecName: "kube-api-access-fjbmg") pod "d1e157c6-3f51-4df4-b6d8-b6a362266668" (UID: "d1e157c6-3f51-4df4-b6d8-b6a362266668"). InnerVolumeSpecName "kube-api-access-fjbmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.796074 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d1e157c6-3f51-4df4-b6d8-b6a362266668" (UID: "d1e157c6-3f51-4df4-b6d8-b6a362266668"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.811101 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1e157c6-3f51-4df4-b6d8-b6a362266668" (UID: "d1e157c6-3f51-4df4-b6d8-b6a362266668"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.812471 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d1e157c6-3f51-4df4-b6d8-b6a362266668" (UID: "d1e157c6-3f51-4df4-b6d8-b6a362266668"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.819413 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-config" (OuterVolumeSpecName: "config") pod "d1e157c6-3f51-4df4-b6d8-b6a362266668" (UID: "d1e157c6-3f51-4df4-b6d8-b6a362266668"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.826182 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d1e157c6-3f51-4df4-b6d8-b6a362266668" (UID: "d1e157c6-3f51-4df4-b6d8-b6a362266668"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.857905 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.857938 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.857951 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.857961 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.857972 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1e157c6-3f51-4df4-b6d8-b6a362266668-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:30 crc kubenswrapper[4731]: I1125 15:13:30.857983 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjbmg\" (UniqueName: \"kubernetes.io/projected/d1e157c6-3f51-4df4-b6d8-b6a362266668-kube-api-access-fjbmg\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.223787 4731 generic.go:334] "Generic (PLEG): container finished" podID="d1e157c6-3f51-4df4-b6d8-b6a362266668" containerID="25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62" exitCode=0 Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.223851 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" event={"ID":"d1e157c6-3f51-4df4-b6d8-b6a362266668","Type":"ContainerDied","Data":"25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62"} Nov 25 
15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.223918 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.224338 4731 scope.go:117] "RemoveContainer" containerID="25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62" Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.224316 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-zxg8k" event={"ID":"d1e157c6-3f51-4df4-b6d8-b6a362266668","Type":"ContainerDied","Data":"a96e7fec57358edf18c97cdb00e1270de8f776a8851f0d471ea859dd54b528ab"} Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.231036 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" event={"ID":"6b14446a-c8de-419e-b8a5-a761ac712a97","Type":"ContainerStarted","Data":"8da681468dcad9052a238ba2bdeaa57b197e6883f2e01a426af8d20d946cf8ea"} Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.231325 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.275436 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" podStartSLOduration=3.27541377 podStartE2EDuration="3.27541377s" podCreationTimestamp="2025-11-25 15:13:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:31.267926322 +0000 UTC m=+1170.015060792" watchObservedRunningTime="2025-11-25 15:13:31.27541377 +0000 UTC m=+1170.022548210" Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.276552 4731 scope.go:117] "RemoveContainer" containerID="574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3" Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.293940 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zxg8k"] Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.307159 4731 scope.go:117] "RemoveContainer" containerID="25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62" Nov 25 15:13:31 crc kubenswrapper[4731]: E1125 15:13:31.307729 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62\": container with ID starting with 25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62 not found: ID does not exist" containerID="25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62" Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.307769 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62"} err="failed to get container status \"25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62\": rpc error: code = NotFound desc = could not find container \"25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62\": container with ID starting with 25c34085a76002f498da44a589f223290c2131cc995a172422796d6e7ad8bd62 not found: ID does not exist" Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.307792 4731 scope.go:117] "RemoveContainer" containerID="574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3" Nov 25 15:13:31 crc kubenswrapper[4731]: E1125 15:13:31.308244 
4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3\": container with ID starting with 574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3 not found: ID does not exist" containerID="574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3" Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.308276 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3"} err="failed to get container status \"574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3\": rpc error: code = NotFound desc = could not find container \"574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3\": container with ID starting with 574f06ee0eaa069264a6d53cd93da882269242e04a1d46368f69d85b7fef1cd3 not found: ID does not exist" Nov 25 15:13:31 crc kubenswrapper[4731]: I1125 15:13:31.312144 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-zxg8k"] Nov 25 15:13:32 crc kubenswrapper[4731]: I1125 15:13:32.242963 4731 generic.go:334] "Generic (PLEG): container finished" podID="c59b5888-0ce3-4fa0-8e74-6ea06ff91390" containerID="7318225c192530846011681644b8be91cf945538e6bcbcad68cb94661455b779" exitCode=0 Nov 25 15:13:32 crc kubenswrapper[4731]: I1125 15:13:32.243101 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-trbrn" event={"ID":"c59b5888-0ce3-4fa0-8e74-6ea06ff91390","Type":"ContainerDied","Data":"7318225c192530846011681644b8be91cf945538e6bcbcad68cb94661455b779"} Nov 25 15:13:32 crc kubenswrapper[4731]: I1125 15:13:32.819766 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1e157c6-3f51-4df4-b6d8-b6a362266668" path="/var/lib/kubelet/pods/d1e157c6-3f51-4df4-b6d8-b6a362266668/volumes" Nov 25 15:13:33 crc kubenswrapper[4731]: I1125 15:13:33.611362 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:33 crc kubenswrapper[4731]: I1125 15:13:33.704475 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-config-data\") pod \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " Nov 25 15:13:33 crc kubenswrapper[4731]: I1125 15:13:33.704617 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-combined-ca-bundle\") pod \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " Nov 25 15:13:33 crc kubenswrapper[4731]: I1125 15:13:33.704647 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh87g\" (UniqueName: \"kubernetes.io/projected/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-kube-api-access-mh87g\") pod \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\" (UID: \"c59b5888-0ce3-4fa0-8e74-6ea06ff91390\") " Nov 25 15:13:33 crc kubenswrapper[4731]: I1125 15:13:33.709564 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-kube-api-access-mh87g" (OuterVolumeSpecName: "kube-api-access-mh87g") pod "c59b5888-0ce3-4fa0-8e74-6ea06ff91390" (UID: "c59b5888-0ce3-4fa0-8e74-6ea06ff91390"). InnerVolumeSpecName "kube-api-access-mh87g". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:33 crc kubenswrapper[4731]: I1125 15:13:33.727516 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c59b5888-0ce3-4fa0-8e74-6ea06ff91390" (UID: "c59b5888-0ce3-4fa0-8e74-6ea06ff91390"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:13:33 crc kubenswrapper[4731]: I1125 15:13:33.746950 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-config-data" (OuterVolumeSpecName: "config-data") pod "c59b5888-0ce3-4fa0-8e74-6ea06ff91390" (UID: "c59b5888-0ce3-4fa0-8e74-6ea06ff91390"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:13:33 crc kubenswrapper[4731]: I1125 15:13:33.806495 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:33 crc kubenswrapper[4731]: I1125 15:13:33.806527 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:33 crc kubenswrapper[4731]: I1125 15:13:33.806539 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh87g\" (UniqueName: \"kubernetes.io/projected/c59b5888-0ce3-4fa0-8e74-6ea06ff91390-kube-api-access-mh87g\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.261054 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-trbrn" event={"ID":"c59b5888-0ce3-4fa0-8e74-6ea06ff91390","Type":"ContainerDied","Data":"d9592fd43ad11b9e221cca3fb3eb09c37f21f8a2b2b4ec32499cac5eef340aae"} Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.261094 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9592fd43ad11b9e221cca3fb3eb09c37f21f8a2b2b4ec32499cac5eef340aae" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.261114 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-trbrn" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.536526 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7jspm"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.536860 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" podUID="6b14446a-c8de-419e-b8a5-a761ac712a97" containerName="dnsmasq-dns" containerID="cri-o://8da681468dcad9052a238ba2bdeaa57b197e6883f2e01a426af8d20d946cf8ea" gracePeriod=10 Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.557676 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-8pgjj"] Nov 25 15:13:34 crc kubenswrapper[4731]: E1125 15:13:34.558399 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c59b5888-0ce3-4fa0-8e74-6ea06ff91390" containerName="keystone-db-sync" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.558428 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c59b5888-0ce3-4fa0-8e74-6ea06ff91390" containerName="keystone-db-sync" Nov 25 15:13:34 crc kubenswrapper[4731]: E1125 15:13:34.558448 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e157c6-3f51-4df4-b6d8-b6a362266668" containerName="dnsmasq-dns" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.558458 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e157c6-3f51-4df4-b6d8-b6a362266668" containerName="dnsmasq-dns" Nov 25 15:13:34 crc kubenswrapper[4731]: E1125 15:13:34.558484 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1e157c6-3f51-4df4-b6d8-b6a362266668" containerName="init" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.558497 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1e157c6-3f51-4df4-b6d8-b6a362266668" containerName="init" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.558796 4731 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d1e157c6-3f51-4df4-b6d8-b6a362266668" containerName="dnsmasq-dns" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.558850 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59b5888-0ce3-4fa0-8e74-6ea06ff91390" containerName="keystone-db-sync" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.559798 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.562898 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.563218 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.563359 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.563548 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4547k" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.563666 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.581828 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8pgjj"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.600887 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-v9lkt"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.618087 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.637944 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-fernet-keys\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.638048 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-combined-ca-bundle\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.638299 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-config-data\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.638328 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-scripts\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.638400 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrhl\" (UniqueName: 
\"kubernetes.io/projected/b0d8668c-960f-40c9-a458-0827ec36137e-kube-api-access-6zrhl\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.638693 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-credential-keys\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753346 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-config\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753400 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7br9\" (UniqueName: \"kubernetes.io/projected/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-kube-api-access-l7br9\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753451 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-fernet-keys\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753476 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-combined-ca-bundle\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753515 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753535 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753567 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-config-data\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753594 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-scripts\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753618 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zrhl\" (UniqueName: \"kubernetes.io/projected/b0d8668c-960f-40c9-a458-0827ec36137e-kube-api-access-6zrhl\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753645 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-credential-keys\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753675 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.753696 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.755588 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-v9lkt"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.770757 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-scripts\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.776359 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-fernet-keys\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.777235 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-credential-keys\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.785841 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-config-data\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.791627 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-combined-ca-bundle\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.835731 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zrhl\" (UniqueName: \"kubernetes.io/projected/b0d8668c-960f-40c9-a458-0827ec36137e-kube-api-access-6zrhl\") pod \"keystone-bootstrap-8pgjj\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.855415 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-c55qz"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.856671 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.856798 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.856973 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.857050 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.857164 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-config\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.857236 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7br9\" (UniqueName: \"kubernetes.io/projected/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-kube-api-access-l7br9\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.858150 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.858777 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.858924 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.859003 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.859192 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-config\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.870309 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.874126 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4zk5x" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.875285 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.884755 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7br9\" (UniqueName: \"kubernetes.io/projected/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-kube-api-access-l7br9\") pod \"dnsmasq-dns-bbf5cc879-v9lkt\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.911719 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-c55qz"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.929992 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-dv2l4"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.931364 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.935428 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.935799 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.936045 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7ql8z" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.942442 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dv2l4"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.961345 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-n6xcb"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.962205 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-config-data\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.962277 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-combined-ca-bundle\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.962320 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-etc-machine-id\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.962342 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vpkf\" (UniqueName: \"kubernetes.io/projected/4bf94950-c565-46c7-95b1-9a62e2915979-kube-api-access-9vpkf\") pod \"heat-db-sync-c55qz\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.962372 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-scripts\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.962410 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-combined-ca-bundle\") pod \"heat-db-sync-c55qz\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.962449 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-db-sync-config-data\") pod \"cinder-db-sync-dv2l4\" (UID: 
\"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.962466 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-config-data\") pod \"heat-db-sync-c55qz\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.962501 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5wpl\" (UniqueName: \"kubernetes.io/projected/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-kube-api-access-w5wpl\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.963412 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.975829 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.979185 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-w9gd8" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.979725 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.979908 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.995631 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-n6xcb"] Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.995896 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:13:34 crc kubenswrapper[4731]: I1125 15:13:34.997759 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.007175 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.030944 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-vs6g5"] Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.032973 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.038422 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.045486 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rshp2" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.045909 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.046116 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.046289 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-kplpn"] Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.047713 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.055697 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.056752 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.057563 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.062116 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9ffpv" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.064356 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-scripts\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.066669 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-log-httpd\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.066709 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-scripts\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.066742 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-combined-ca-bundle\") pod \"heat-db-sync-c55qz\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.066763 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-run-httpd\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091298 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-wdzwf\" (UniqueName: \"kubernetes.io/projected/e845d406-36a7-4202-b123-51efd30331b8-kube-api-access-wdzwf\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091345 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-combined-ca-bundle\") pod \"neutron-db-sync-n6xcb\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091384 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-db-sync-config-data\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091378 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vs6g5"] Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091412 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-config-data\") pod \"heat-db-sync-c55qz\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091451 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-config-data\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091484 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-config-data\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091539 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5wpl\" (UniqueName: \"kubernetes.io/projected/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-kube-api-access-w5wpl\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091599 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5g6w\" (UniqueName: \"kubernetes.io/projected/8836cab0-6606-45e6-adb7-2797a9e7d022-kube-api-access-z5g6w\") pod \"neutron-db-sync-n6xcb\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091624 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091661 4731 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-config\") pod \"neutron-db-sync-n6xcb\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091721 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-combined-ca-bundle\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091746 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-scripts\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091778 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-config-data\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091830 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcxdz\" (UniqueName: \"kubernetes.io/projected/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-kube-api-access-qcxdz\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091870 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091893 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-logs\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091922 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-combined-ca-bundle\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.091980 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-etc-machine-id\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.092008 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vpkf\" 
(UniqueName: \"kubernetes.io/projected/4bf94950-c565-46c7-95b1-9a62e2915979-kube-api-access-9vpkf\") pod \"heat-db-sync-c55qz\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.092942 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-etc-machine-id\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.115084 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-combined-ca-bundle\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.126753 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kplpn"] Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.154599 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-combined-ca-bundle\") pod \"heat-db-sync-c55qz\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.158357 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vpkf\" (UniqueName: \"kubernetes.io/projected/4bf94950-c565-46c7-95b1-9a62e2915979-kube-api-access-9vpkf\") pod \"heat-db-sync-c55qz\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.162737 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-config-data\") pod \"heat-db-sync-c55qz\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.163589 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-db-sync-config-data\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.163665 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-config-data\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.174527 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-scripts\") pod \"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.186172 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5wpl\" (UniqueName: \"kubernetes.io/projected/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-kube-api-access-w5wpl\") pod 
\"cinder-db-sync-dv2l4\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.186287 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-v9lkt"] Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.227633 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p49g6"] Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.231725 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p49g6"] Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.231991 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.244871 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-c55qz" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.246081 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5g6w\" (UniqueName: \"kubernetes.io/projected/8836cab0-6606-45e6-adb7-2797a9e7d022-kube-api-access-z5g6w\") pod \"neutron-db-sync-n6xcb\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.249576 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.249685 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-config\") pod \"neutron-db-sync-n6xcb\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.249849 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-combined-ca-bundle\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.249924 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-scripts\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.250056 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfgsf\" (UniqueName: \"kubernetes.io/projected/b5547f3b-298a-4906-8c60-1f51c87c1059-kube-api-access-wfgsf\") pod \"barbican-db-sync-kplpn\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.250146 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qcxdz\" (UniqueName: \"kubernetes.io/projected/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-kube-api-access-qcxdz\") pod \"placement-db-sync-vs6g5\" (UID: 
\"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.250229 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-db-sync-config-data\") pod \"barbican-db-sync-kplpn\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.250320 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.250407 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-logs\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.250660 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-log-httpd\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.250737 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-combined-ca-bundle\") pod \"barbican-db-sync-kplpn\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.250848 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-scripts\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.250925 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-run-httpd\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.251085 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdzwf\" (UniqueName: \"kubernetes.io/projected/e845d406-36a7-4202-b123-51efd30331b8-kube-api-access-wdzwf\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.251167 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-combined-ca-bundle\") pod \"neutron-db-sync-n6xcb\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.251262 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-config-data\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.251441 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-config-data\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.255349 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-log-httpd\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.255874 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-logs\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.276567 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-run-httpd\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.282039 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.286063 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-config-data\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.286432 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5g6w\" (UniqueName: \"kubernetes.io/projected/8836cab0-6606-45e6-adb7-2797a9e7d022-kube-api-access-z5g6w\") pod \"neutron-db-sync-n6xcb\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.289865 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-scripts\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.290054 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-combined-ca-bundle\") pod \"neutron-db-sync-n6xcb\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.290667 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-config-data\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " 
pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.294711 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdzwf\" (UniqueName: \"kubernetes.io/projected/e845d406-36a7-4202-b123-51efd30331b8-kube-api-access-wdzwf\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.298695 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-scripts\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.299127 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.299662 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.302450 4731 generic.go:334] "Generic (PLEG): container finished" podID="6b14446a-c8de-419e-b8a5-a761ac712a97" containerID="8da681468dcad9052a238ba2bdeaa57b197e6883f2e01a426af8d20d946cf8ea" exitCode=0 Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.302591 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" event={"ID":"6b14446a-c8de-419e-b8a5-a761ac712a97","Type":"ContainerDied","Data":"8da681468dcad9052a238ba2bdeaa57b197e6883f2e01a426af8d20d946cf8ea"} Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.303648 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-config\") pod \"neutron-db-sync-n6xcb\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.312376 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qcxdz\" (UniqueName: \"kubernetes.io/projected/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-kube-api-access-qcxdz\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.327559 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-combined-ca-bundle\") pod \"placement-db-sync-vs6g5\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.332862 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.353470 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.354095 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-combined-ca-bundle\") pod \"barbican-db-sync-kplpn\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.354168 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.354205 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66k6l\" (UniqueName: \"kubernetes.io/projected/0f353937-4c45-4332-abdb-a187c7c7101d-kube-api-access-66k6l\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.354244 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.354306 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.354423 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfgsf\" (UniqueName: \"kubernetes.io/projected/b5547f3b-298a-4906-8c60-1f51c87c1059-kube-api-access-wfgsf\") pod \"barbican-db-sync-kplpn\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.354447 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-config\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.354479 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-db-sync-config-data\") pod \"barbican-db-sync-kplpn\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.354525 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-sb\") pod 
\"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.357805 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-db-sync-config-data\") pod \"barbican-db-sync-kplpn\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.363129 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-combined-ca-bundle\") pod \"barbican-db-sync-kplpn\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.392572 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vs6g5" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.393718 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfgsf\" (UniqueName: \"kubernetes.io/projected/b5547f3b-298a-4906-8c60-1f51c87c1059-kube-api-access-wfgsf\") pod \"barbican-db-sync-kplpn\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.449196 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.465083 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.465217 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.465305 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-config\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.465376 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.465448 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-kplpn" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.465507 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.465531 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66k6l\" (UniqueName: \"kubernetes.io/projected/0f353937-4c45-4332-abdb-a187c7c7101d-kube-api-access-66k6l\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.465996 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.466060 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.468204 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.477614 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-config\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.483108 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.501448 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66k6l\" (UniqueName: \"kubernetes.io/projected/0f353937-4c45-4332-abdb-a187c7c7101d-kube-api-access-66k6l\") pod \"dnsmasq-dns-56df8fb6b7-p49g6\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.571100 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-svc\") pod \"6b14446a-c8de-419e-b8a5-a761ac712a97\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.571133 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-nb\") pod \"6b14446a-c8de-419e-b8a5-a761ac712a97\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.571152 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-swift-storage-0\") pod \"6b14446a-c8de-419e-b8a5-a761ac712a97\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.571247 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-config\") pod \"6b14446a-c8de-419e-b8a5-a761ac712a97\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.571279 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4qp7\" (UniqueName: \"kubernetes.io/projected/6b14446a-c8de-419e-b8a5-a761ac712a97-kube-api-access-m4qp7\") pod \"6b14446a-c8de-419e-b8a5-a761ac712a97\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.571301 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-sb\") pod \"6b14446a-c8de-419e-b8a5-a761ac712a97\" (UID: \"6b14446a-c8de-419e-b8a5-a761ac712a97\") " Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.613104 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b14446a-c8de-419e-b8a5-a761ac712a97-kube-api-access-m4qp7" (OuterVolumeSpecName: "kube-api-access-m4qp7") pod "6b14446a-c8de-419e-b8a5-a761ac712a97" (UID: "6b14446a-c8de-419e-b8a5-a761ac712a97"). InnerVolumeSpecName "kube-api-access-m4qp7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:35 crc kubenswrapper[4731]: I1125 15:13:35.622054 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.674588 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4qp7\" (UniqueName: \"kubernetes.io/projected/6b14446a-c8de-419e-b8a5-a761ac712a97-kube-api-access-m4qp7\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.691529 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:13:36 crc kubenswrapper[4731]: E1125 15:13:35.691946 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b14446a-c8de-419e-b8a5-a761ac712a97" containerName="init" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.691958 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b14446a-c8de-419e-b8a5-a761ac712a97" containerName="init" Nov 25 15:13:36 crc kubenswrapper[4731]: E1125 15:13:35.691969 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b14446a-c8de-419e-b8a5-a761ac712a97" containerName="dnsmasq-dns" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.691975 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b14446a-c8de-419e-b8a5-a761ac712a97" containerName="dnsmasq-dns" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.692168 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b14446a-c8de-419e-b8a5-a761ac712a97" containerName="dnsmasq-dns" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.693143 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.702086 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b14446a-c8de-419e-b8a5-a761ac712a97" (UID: "6b14446a-c8de-419e-b8a5-a761ac712a97"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.702704 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.702844 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.703009 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.703401 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-lxlch" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.716036 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6b14446a-c8de-419e-b8a5-a761ac712a97" (UID: "6b14446a-c8de-419e-b8a5-a761ac712a97"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.718215 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.768395 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6b14446a-c8de-419e-b8a5-a761ac712a97" (UID: "6b14446a-c8de-419e-b8a5-a761ac712a97"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.777132 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.777166 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.777180 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.779768 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.781585 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.786386 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-config" (OuterVolumeSpecName: "config") pod "6b14446a-c8de-419e-b8a5-a761ac712a97" (UID: "6b14446a-c8de-419e-b8a5-a761ac712a97"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.789046 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.789321 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.792134 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6b14446a-c8de-419e-b8a5-a761ac712a97" (UID: "6b14446a-c8de-419e-b8a5-a761ac712a97"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.827923 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.876211 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-8pgjj"] Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.878553 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.878597 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.878645 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.891984 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-logs\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.892068 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.893596 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksck8\" (UniqueName: \"kubernetes.io/projected/a65199b3-dcd3-480f-9968-d6a0df2e8178-kube-api-access-ksck8\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.893800 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.893913 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.894086 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.894103 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b14446a-c8de-419e-b8a5-a761ac712a97-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:36 crc kubenswrapper[4731]: W1125 15:13:35.927399 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc76996cb_9b49_4cbe_8cd4_e92c6bc0dc60.slice/crio-5009715c1262ab731376a40a865ab07324b9890d1cfdf3ad58217d5179238f36 WatchSource:0}: Error finding container 5009715c1262ab731376a40a865ab07324b9890d1cfdf3ad58217d5179238f36: Status 404 returned error can't find the container with id 5009715c1262ab731376a40a865ab07324b9890d1cfdf3ad58217d5179238f36 Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.931484 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-v9lkt"] Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.995683 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996022 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-scripts\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996048 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996083 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhlbf\" (UniqueName: \"kubernetes.io/projected/96bacdfa-848f-4d9f-80eb-ba476fec752d-kube-api-access-nhlbf\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996111 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996131 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996157 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996212 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-logs\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996258 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996285 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996310 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-config-data\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996329 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksck8\" (UniqueName: \"kubernetes.io/projected/a65199b3-dcd3-480f-9968-d6a0df2e8178-kube-api-access-ksck8\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996358 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996380 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996405 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.996429 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-logs\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.998026 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:35.999335 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.000171 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-logs\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.007543 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-config-data\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.007885 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-scripts\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.010523 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.019953 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksck8\" (UniqueName: \"kubernetes.io/projected/a65199b3-dcd3-480f-9968-d6a0df2e8178-kube-api-access-ksck8\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.027284 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-internal-tls-certs\") pod 
\"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.048332 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.098688 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-scripts\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.098766 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhlbf\" (UniqueName: \"kubernetes.io/projected/96bacdfa-848f-4d9f-80eb-ba476fec752d-kube-api-access-nhlbf\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.098795 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.098898 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.098978 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.099022 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-config-data\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.099073 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.099127 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-logs\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " 
pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.100117 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.100333 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.100997 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-logs\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.105546 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.107657 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.110165 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-scripts\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.115686 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-config-data\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.127519 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhlbf\" (UniqueName: \"kubernetes.io/projected/96bacdfa-848f-4d9f-80eb-ba476fec752d-kube-api-access-nhlbf\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.144820 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.303916 4731 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.317765 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8pgjj" event={"ID":"b0d8668c-960f-40c9-a458-0827ec36137e","Type":"ContainerStarted","Data":"89a22c4d247f5526ce06a33106ecd5bc7d54951c969b755187c6430b1b108898"} Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.320508 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" event={"ID":"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60","Type":"ContainerStarted","Data":"5009715c1262ab731376a40a865ab07324b9890d1cfdf3ad58217d5179238f36"} Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.328259 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" event={"ID":"6b14446a-c8de-419e-b8a5-a761ac712a97","Type":"ContainerDied","Data":"bf28270ab4446b946c00fb885638a112ffe18ae6435f79e8362a0e44be97fc2a"} Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.328335 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-7jspm" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.328341 4731 scope.go:117] "RemoveContainer" containerID="8da681468dcad9052a238ba2bdeaa57b197e6883f2e01a426af8d20d946cf8ea" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.346191 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.358378 4731 scope.go:117] "RemoveContainer" containerID="cd9c6156dd33fc3d0eed192f2013012937521d874c748f604944e8896f385cc1" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.374516 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7jspm"] Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.388392 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-7jspm"] Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.822274 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b14446a-c8de-419e-b8a5-a761ac712a97" path="/var/lib/kubelet/pods/6b14446a-c8de-419e-b8a5-a761ac712a97/volumes" Nov 25 15:13:36 crc kubenswrapper[4731]: I1125 15:13:36.991803 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-dv2l4"] Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.019006 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-c55qz"] Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.214809 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-kplpn"] Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.224896 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:13:37 crc kubenswrapper[4731]: W1125 15:13:37.234307 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb5547f3b_298a_4906_8c60_1f51c87c1059.slice/crio-9e07e34b88b0c33a4a85314f836ff489427e11081724293b5db71b6783089fd4 WatchSource:0}: Error finding container 9e07e34b88b0c33a4a85314f836ff489427e11081724293b5db71b6783089fd4: Status 404 returned error can't find the container with id 9e07e34b88b0c33a4a85314f836ff489427e11081724293b5db71b6783089fd4 Nov 25 15:13:37 crc kubenswrapper[4731]: 
I1125 15:13:37.241753 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-n6xcb"] Nov 25 15:13:37 crc kubenswrapper[4731]: W1125 15:13:37.282638 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode845d406_36a7_4202_b123_51efd30331b8.slice/crio-0295c9f3171a0360ee965a71fbc9bfc6a87bb341c0cad0974861a2509783d61f WatchSource:0}: Error finding container 0295c9f3171a0360ee965a71fbc9bfc6a87bb341c0cad0974861a2509783d61f: Status 404 returned error can't find the container with id 0295c9f3171a0360ee965a71fbc9bfc6a87bb341c0cad0974861a2509783d61f Nov 25 15:13:37 crc kubenswrapper[4731]: W1125 15:13:37.283021 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06c14d90_d1d0_43ce_b4a2_3b36dad72a8e.slice/crio-b6b2badae8c3f06a6fe3df3e9e3da6dfad73c0b751bb1f9bb93bf0a1e2332ae2 WatchSource:0}: Error finding container b6b2badae8c3f06a6fe3df3e9e3da6dfad73c0b751bb1f9bb93bf0a1e2332ae2: Status 404 returned error can't find the container with id b6b2badae8c3f06a6fe3df3e9e3da6dfad73c0b751bb1f9bb93bf0a1e2332ae2 Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.296234 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.362959 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-vs6g5"] Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.364303 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vs6g5" event={"ID":"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e","Type":"ContainerStarted","Data":"b6b2badae8c3f06a6fe3df3e9e3da6dfad73c0b751bb1f9bb93bf0a1e2332ae2"} Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.387155 4731 generic.go:334] "Generic (PLEG): container finished" podID="c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" containerID="ca982aeb4ebe7973845bd170e8a941fe9ba169143e9406d33d8ca7580b269ef3" exitCode=0 Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.387219 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" event={"ID":"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60","Type":"ContainerDied","Data":"ca982aeb4ebe7973845bd170e8a941fe9ba169143e9406d33d8ca7580b269ef3"} Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.389226 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.429578 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kplpn" event={"ID":"b5547f3b-298a-4906-8c60-1f51c87c1059","Type":"ContainerStarted","Data":"9e07e34b88b0c33a4a85314f836ff489427e11081724293b5db71b6783089fd4"} Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.431131 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n6xcb" event={"ID":"8836cab0-6606-45e6-adb7-2797a9e7d022","Type":"ContainerStarted","Data":"f4bc6494d82b3d5d4a602ef116d898759ac2c3ad0b0fa3e6af5ce1e7570826ce"} Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.456752 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dv2l4" event={"ID":"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79","Type":"ContainerStarted","Data":"223179c58712b5c516aba895b5f0cfd45cab345636a75bb481155e2d58d9fe96"} Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.468477 4731 
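
[Annotation] The "SyncLoop (PLEG): event for pod" entries come from the Pod Lifecycle Event Generator, which periodically relists runtime state and diffs it against the previous snapshot to emit ContainerStarted/ContainerDied events; the manager.go:1169 "Failed to process watch event ... 404" warnings are a benign race where cAdvisor sees a fresh crio cgroup before the container is queryable. A rough sketch of the relist diff, with illustrative types rather than the kubelet's:

package main

import "fmt"

type event struct{ Pod, Type, ID string }

// seen reports whether id appears in ids.
func seen(ids []string, id string) bool {
	for _, x := range ids {
		if x == id {
			return true
		}
	}
	return false
}

// relist diffs two snapshots of running container IDs per pod and emits
// ContainerStarted / ContainerDied events, loosely mirroring PLEG.
func relist(prev, curr map[string][]string) []event {
	var evs []event
	for pod, ids := range curr {
		for _, id := range ids {
			if !seen(prev[pod], id) {
				evs = append(evs, event{pod, "ContainerStarted", id})
			}
		}
	}
	for pod, ids := range prev {
		for _, id := range ids {
			if !seen(curr[pod], id) {
				evs = append(evs, event{pod, "ContainerDied", id})
			}
		}
	}
	return evs
}

func main() {
	prev := map[string][]string{"openstack/keystone-bootstrap-8pgjj": {"89a22c4d"}}
	curr := map[string][]string{"openstack/keystone-bootstrap-8pgjj": {"89a22c4d", "351cec3b"}}
	for _, e := range relist(prev, curr) {
		fmt.Printf("SyncLoop (PLEG): event for pod %s: %s %s\n", e.Pod, e.Type, e.ID)
	}
}
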
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8pgjj" event={"ID":"b0d8668c-960f-40c9-a458-0827ec36137e","Type":"ContainerStarted","Data":"351cec3bc637851ecfcb7e696eae85cb1280812e1ccb194b5f480b514ef89fd8"} Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.469407 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e845d406-36a7-4202-b123-51efd30331b8","Type":"ContainerStarted","Data":"0295c9f3171a0360ee965a71fbc9bfc6a87bb341c0cad0974861a2509783d61f"} Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.484174 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.504310 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-8pgjj" podStartSLOduration=3.504293565 podStartE2EDuration="3.504293565s" podCreationTimestamp="2025-11-25 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:37.501234338 +0000 UTC m=+1176.248368768" watchObservedRunningTime="2025-11-25 15:13:37.504293565 +0000 UTC m=+1176.251427995" Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.511313 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-c55qz" event={"ID":"4bf94950-c565-46c7-95b1-9a62e2915979","Type":"ContainerStarted","Data":"fea2a259ff31ea02c177ae7088be85ba945938c748850662d97162110ecc8249"} Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.633942 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p49g6"] Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.770500 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:13:37 crc kubenswrapper[4731]: I1125 15:13:37.978609 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.153013 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7br9\" (UniqueName: \"kubernetes.io/projected/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-kube-api-access-l7br9\") pod \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.154464 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-svc\") pod \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.154505 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-config\") pod \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.154566 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-swift-storage-0\") pod \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.155003 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-sb\") pod \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.155087 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-nb\") pod \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\" (UID: \"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60\") " Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.164940 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-kube-api-access-l7br9" (OuterVolumeSpecName: "kube-api-access-l7br9") pod "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" (UID: "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60"). InnerVolumeSpecName "kube-api-access-l7br9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.181177 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" (UID: "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.184166 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" (UID: "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.188705 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" (UID: "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.190606 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" (UID: "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.199316 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-config" (OuterVolumeSpecName: "config") pod "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" (UID: "c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.258881 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.258911 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7br9\" (UniqueName: \"kubernetes.io/projected/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-kube-api-access-l7br9\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.258937 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.258946 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.258955 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.258963 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.478332 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:13:38 crc kubenswrapper[4731]: W1125 15:13:38.526398 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96bacdfa_848f_4d9f_80eb_ba476fec752d.slice/crio-79ce2a1e3729fd9f4e19dd5fdfcba4e4d9943e9b7575ca57923af98e24b9cf0e WatchSource:0}: Error finding container 79ce2a1e3729fd9f4e19dd5fdfcba4e4d9943e9b7575ca57923af98e24b9cf0e: Status 404 returned error can't 
find the container with id 79ce2a1e3729fd9f4e19dd5fdfcba4e4d9943e9b7575ca57923af98e24b9cf0e Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.528544 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" event={"ID":"c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60","Type":"ContainerDied","Data":"5009715c1262ab731376a40a865ab07324b9890d1cfdf3ad58217d5179238f36"} Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.528583 4731 scope.go:117] "RemoveContainer" containerID="ca982aeb4ebe7973845bd170e8a941fe9ba169143e9406d33d8ca7580b269ef3" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.528691 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-v9lkt" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.551526 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n6xcb" event={"ID":"8836cab0-6606-45e6-adb7-2797a9e7d022","Type":"ContainerStarted","Data":"32e56b99379bffd1ca4c6d2daf598c435fb0e4739aacdef3d7f1868f24cbd2c0"} Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.561109 4731 generic.go:334] "Generic (PLEG): container finished" podID="0f353937-4c45-4332-abdb-a187c7c7101d" containerID="532b6144532f12922a4e312b89b6e271e61bb852b554356ae482c133ae817198" exitCode=0 Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.561191 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" event={"ID":"0f353937-4c45-4332-abdb-a187c7c7101d","Type":"ContainerDied","Data":"532b6144532f12922a4e312b89b6e271e61bb852b554356ae482c133ae817198"} Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.561218 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" event={"ID":"0f353937-4c45-4332-abdb-a187c7c7101d","Type":"ContainerStarted","Data":"3c689ae7bc3fd336f0040bf15e0c9c11aa860653663598f545395f33bbf42bc2"} Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.585922 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a65199b3-dcd3-480f-9968-d6a0df2e8178","Type":"ContainerStarted","Data":"ae2ee0e68b25c92c1b65d35c9e1201370cee8861f9c363191d42ee6fbe6e5aed"} Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.641798 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-v9lkt"] Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.656811 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-v9lkt"] Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.675242 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-n6xcb" podStartSLOduration=4.675222975 podStartE2EDuration="4.675222975s" podCreationTimestamp="2025-11-25 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:38.623048937 +0000 UTC m=+1177.370183367" watchObservedRunningTime="2025-11-25 15:13:38.675222975 +0000 UTC m=+1177.422357405" Nov 25 15:13:38 crc kubenswrapper[4731]: I1125 15:13:38.831134 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" path="/var/lib/kubelet/pods/c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60/volumes" Nov 25 15:13:39 crc kubenswrapper[4731]: I1125 15:13:39.595541 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-external-api-0" event={"ID":"96bacdfa-848f-4d9f-80eb-ba476fec752d","Type":"ContainerStarted","Data":"c9cf5881cde6c6e171e4b8db25d9e0eef4842ba28732bc4e791d863d6c6b0b55"} Nov 25 15:13:39 crc kubenswrapper[4731]: I1125 15:13:39.596019 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"96bacdfa-848f-4d9f-80eb-ba476fec752d","Type":"ContainerStarted","Data":"79ce2a1e3729fd9f4e19dd5fdfcba4e4d9943e9b7575ca57923af98e24b9cf0e"} Nov 25 15:13:39 crc kubenswrapper[4731]: I1125 15:13:39.599272 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" event={"ID":"0f353937-4c45-4332-abdb-a187c7c7101d","Type":"ContainerStarted","Data":"386abf86a2cdbca038f2e9784944fa7f034d4409435c0f386a902bb88e7ee3d6"} Nov 25 15:13:39 crc kubenswrapper[4731]: I1125 15:13:39.600512 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:39 crc kubenswrapper[4731]: I1125 15:13:39.638498 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" podStartSLOduration=4.638197213 podStartE2EDuration="4.638197213s" podCreationTimestamp="2025-11-25 15:13:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:39.63527052 +0000 UTC m=+1178.382404960" watchObservedRunningTime="2025-11-25 15:13:39.638197213 +0000 UTC m=+1178.385331643" Nov 25 15:13:39 crc kubenswrapper[4731]: I1125 15:13:39.648689 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a65199b3-dcd3-480f-9968-d6a0df2e8178","Type":"ContainerStarted","Data":"8e115eb1c923d259cc0c45e0a3bbb8e2a696b950d948f8b201c60427a111af68"} Nov 25 15:13:40 crc kubenswrapper[4731]: I1125 15:13:40.683096 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"96bacdfa-848f-4d9f-80eb-ba476fec752d","Type":"ContainerStarted","Data":"b3a857f70985c9584454e1aa3f5710b20ec84cb08860f98b69cb3c3dbc817a12"} Nov 25 15:13:40 crc kubenswrapper[4731]: I1125 15:13:40.683219 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="96bacdfa-848f-4d9f-80eb-ba476fec752d" containerName="glance-log" containerID="cri-o://c9cf5881cde6c6e171e4b8db25d9e0eef4842ba28732bc4e791d863d6c6b0b55" gracePeriod=30 Nov 25 15:13:40 crc kubenswrapper[4731]: I1125 15:13:40.683628 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="96bacdfa-848f-4d9f-80eb-ba476fec752d" containerName="glance-httpd" containerID="cri-o://b3a857f70985c9584454e1aa3f5710b20ec84cb08860f98b69cb3c3dbc817a12" gracePeriod=30 Nov 25 15:13:40 crc kubenswrapper[4731]: I1125 15:13:40.691397 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a65199b3-dcd3-480f-9968-d6a0df2e8178","Type":"ContainerStarted","Data":"2097f669c239666a85c68627aef11b456c23fe477f562ddf01539745b8bd919c"} Nov 25 15:13:40 crc kubenswrapper[4731]: I1125 15:13:40.691597 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a65199b3-dcd3-480f-9968-d6a0df2e8178" containerName="glance-log" 
containerID="cri-o://8e115eb1c923d259cc0c45e0a3bbb8e2a696b950d948f8b201c60427a111af68" gracePeriod=30 Nov 25 15:13:40 crc kubenswrapper[4731]: I1125 15:13:40.691761 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="a65199b3-dcd3-480f-9968-d6a0df2e8178" containerName="glance-httpd" containerID="cri-o://2097f669c239666a85c68627aef11b456c23fe477f562ddf01539745b8bd919c" gracePeriod=30 Nov 25 15:13:40 crc kubenswrapper[4731]: I1125 15:13:40.720971 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=6.7209466540000005 podStartE2EDuration="6.720946654s" podCreationTimestamp="2025-11-25 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:40.705853075 +0000 UTC m=+1179.452987525" watchObservedRunningTime="2025-11-25 15:13:40.720946654 +0000 UTC m=+1179.468081094" Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.731638 4731 generic.go:334] "Generic (PLEG): container finished" podID="b0d8668c-960f-40c9-a458-0827ec36137e" containerID="351cec3bc637851ecfcb7e696eae85cb1280812e1ccb194b5f480b514ef89fd8" exitCode=0 Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.732033 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8pgjj" event={"ID":"b0d8668c-960f-40c9-a458-0827ec36137e","Type":"ContainerDied","Data":"351cec3bc637851ecfcb7e696eae85cb1280812e1ccb194b5f480b514ef89fd8"} Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.746299 4731 generic.go:334] "Generic (PLEG): container finished" podID="96bacdfa-848f-4d9f-80eb-ba476fec752d" containerID="b3a857f70985c9584454e1aa3f5710b20ec84cb08860f98b69cb3c3dbc817a12" exitCode=143 Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.746335 4731 generic.go:334] "Generic (PLEG): container finished" podID="96bacdfa-848f-4d9f-80eb-ba476fec752d" containerID="c9cf5881cde6c6e171e4b8db25d9e0eef4842ba28732bc4e791d863d6c6b0b55" exitCode=143 Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.746388 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"96bacdfa-848f-4d9f-80eb-ba476fec752d","Type":"ContainerDied","Data":"b3a857f70985c9584454e1aa3f5710b20ec84cb08860f98b69cb3c3dbc817a12"} Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.746422 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"96bacdfa-848f-4d9f-80eb-ba476fec752d","Type":"ContainerDied","Data":"c9cf5881cde6c6e171e4b8db25d9e0eef4842ba28732bc4e791d863d6c6b0b55"} Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.756549 4731 generic.go:334] "Generic (PLEG): container finished" podID="a65199b3-dcd3-480f-9968-d6a0df2e8178" containerID="2097f669c239666a85c68627aef11b456c23fe477f562ddf01539745b8bd919c" exitCode=143 Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.756579 4731 generic.go:334] "Generic (PLEG): container finished" podID="a65199b3-dcd3-480f-9968-d6a0df2e8178" containerID="8e115eb1c923d259cc0c45e0a3bbb8e2a696b950d948f8b201c60427a111af68" exitCode=143 Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.757219 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"a65199b3-dcd3-480f-9968-d6a0df2e8178","Type":"ContainerDied","Data":"2097f669c239666a85c68627aef11b456c23fe477f562ddf01539745b8bd919c"} Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.757248 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a65199b3-dcd3-480f-9968-d6a0df2e8178","Type":"ContainerDied","Data":"8e115eb1c923d259cc0c45e0a3bbb8e2a696b950d948f8b201c60427a111af68"} Nov 25 15:13:41 crc kubenswrapper[4731]: I1125 15:13:41.766255 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.766240005 podStartE2EDuration="7.766240005s" podCreationTimestamp="2025-11-25 15:13:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:13:40.732202196 +0000 UTC m=+1179.479336626" watchObservedRunningTime="2025-11-25 15:13:41.766240005 +0000 UTC m=+1180.513374435" Nov 25 15:13:45 crc kubenswrapper[4731]: I1125 15:13:45.623598 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:13:45 crc kubenswrapper[4731]: I1125 15:13:45.702550 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2swq5"] Nov 25 15:13:45 crc kubenswrapper[4731]: I1125 15:13:45.702849 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="dnsmasq-dns" containerID="cri-o://5e7598b1563f22146a5642a2b91b284e0289259a66cd71559737621209651d0c" gracePeriod=10 Nov 25 15:13:46 crc kubenswrapper[4731]: I1125 15:13:46.255437 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Nov 25 15:13:46 crc kubenswrapper[4731]: I1125 15:13:46.800848 4731 generic.go:334] "Generic (PLEG): container finished" podID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerID="5e7598b1563f22146a5642a2b91b284e0289259a66cd71559737621209651d0c" exitCode=0 Nov 25 15:13:46 crc kubenswrapper[4731]: I1125 15:13:46.800992 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" event={"ID":"e032f375-15e6-4cbb-adaf-ec7187abecba","Type":"ContainerDied","Data":"5e7598b1563f22146a5642a2b91b284e0289259a66cd71559737621209651d0c"} Nov 25 15:13:50 crc kubenswrapper[4731]: I1125 15:13:50.040262 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:13:50 crc kubenswrapper[4731]: I1125 15:13:50.040653 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:13:50 crc kubenswrapper[4731]: I1125 15:13:50.040702 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 15:13:50 crc kubenswrapper[4731]: I1125 15:13:50.041377 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f9bc0406706a4644ebe665a4db06c99d1f92fdd1f784b0fafdbe78a287d513a2"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:13:50 crc kubenswrapper[4731]: I1125 15:13:50.041430 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://f9bc0406706a4644ebe665a4db06c99d1f92fdd1f784b0fafdbe78a287d513a2" gracePeriod=600 Nov 25 15:13:50 crc kubenswrapper[4731]: I1125 15:13:50.836302 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="f9bc0406706a4644ebe665a4db06c99d1f92fdd1f784b0fafdbe78a287d513a2" exitCode=0 Nov 25 15:13:50 crc kubenswrapper[4731]: I1125 15:13:50.836389 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"f9bc0406706a4644ebe665a4db06c99d1f92fdd1f784b0fafdbe78a287d513a2"} Nov 25 15:13:50 crc kubenswrapper[4731]: I1125 15:13:50.836650 4731 scope.go:117] "RemoveContainer" containerID="0c9a20a237355e5d71b9ca12829a2374c42e2593ae50b43c8d42515f2f362b09" Nov 25 15:13:51 crc kubenswrapper[4731]: I1125 15:13:51.254841 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.254983 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: connect: connection refused" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.255641 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.397474 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.549305 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-fernet-keys\") pod \"b0d8668c-960f-40c9-a458-0827ec36137e\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.549385 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zrhl\" (UniqueName: \"kubernetes.io/projected/b0d8668c-960f-40c9-a458-0827ec36137e-kube-api-access-6zrhl\") pod \"b0d8668c-960f-40c9-a458-0827ec36137e\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.549471 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-scripts\") pod \"b0d8668c-960f-40c9-a458-0827ec36137e\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.550358 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-combined-ca-bundle\") pod \"b0d8668c-960f-40c9-a458-0827ec36137e\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.550425 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-config-data\") pod \"b0d8668c-960f-40c9-a458-0827ec36137e\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.550502 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-credential-keys\") pod \"b0d8668c-960f-40c9-a458-0827ec36137e\" (UID: \"b0d8668c-960f-40c9-a458-0827ec36137e\") " Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.555837 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d8668c-960f-40c9-a458-0827ec36137e-kube-api-access-6zrhl" (OuterVolumeSpecName: "kube-api-access-6zrhl") pod "b0d8668c-960f-40c9-a458-0827ec36137e" (UID: "b0d8668c-960f-40c9-a458-0827ec36137e"). InnerVolumeSpecName "kube-api-access-6zrhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.556545 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-scripts" (OuterVolumeSpecName: "scripts") pod "b0d8668c-960f-40c9-a458-0827ec36137e" (UID: "b0d8668c-960f-40c9-a458-0827ec36137e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.558354 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b0d8668c-960f-40c9-a458-0827ec36137e" (UID: "b0d8668c-960f-40c9-a458-0827ec36137e"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.558975 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b0d8668c-960f-40c9-a458-0827ec36137e" (UID: "b0d8668c-960f-40c9-a458-0827ec36137e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.580801 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0d8668c-960f-40c9-a458-0827ec36137e" (UID: "b0d8668c-960f-40c9-a458-0827ec36137e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.597084 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-config-data" (OuterVolumeSpecName: "config-data") pod "b0d8668c-960f-40c9-a458-0827ec36137e" (UID: "b0d8668c-960f-40c9-a458-0827ec36137e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.652915 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.652961 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.652976 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.652988 4731 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.652997 4731 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0d8668c-960f-40c9-a458-0827ec36137e-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.653008 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zrhl\" (UniqueName: \"kubernetes.io/projected/b0d8668c-960f-40c9-a458-0827ec36137e-kube-api-access-6zrhl\") on node \"crc\" DevicePath \"\"" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.896651 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-8pgjj" event={"ID":"b0d8668c-960f-40c9-a458-0827ec36137e","Type":"ContainerDied","Data":"89a22c4d247f5526ce06a33106ecd5bc7d54951c969b755187c6430b1b108898"} Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.896690 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89a22c4d247f5526ce06a33106ecd5bc7d54951c969b755187c6430b1b108898" Nov 25 15:13:56 crc kubenswrapper[4731]: I1125 15:13:56.896747 4731 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-8pgjj" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.507028 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-8pgjj"] Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.516777 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-8pgjj"] Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.574383 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fcglk"] Nov 25 15:13:57 crc kubenswrapper[4731]: E1125 15:13:57.574901 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" containerName="init" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.574917 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" containerName="init" Nov 25 15:13:57 crc kubenswrapper[4731]: E1125 15:13:57.574932 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d8668c-960f-40c9-a458-0827ec36137e" containerName="keystone-bootstrap" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.574938 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d8668c-960f-40c9-a458-0827ec36137e" containerName="keystone-bootstrap" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.575093 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d8668c-960f-40c9-a458-0827ec36137e" containerName="keystone-bootstrap" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.575107 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c76996cb-9b49-4cbe-8cd4-e92c6bc0dc60" containerName="init" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.575713 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.578346 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-4547k" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.579008 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.579420 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.579622 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.580619 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.589677 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fcglk"] Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.672344 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-credential-keys\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.672651 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-fernet-keys\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.672766 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2xhx\" (UniqueName: \"kubernetes.io/projected/b0a9b890-4390-4543-ad98-6e8fd9c0d627-kube-api-access-v2xhx\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.672913 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-config-data\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.672984 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-scripts\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.673178 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-combined-ca-bundle\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.774891 4731 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-v2xhx\" (UniqueName: \"kubernetes.io/projected/b0a9b890-4390-4543-ad98-6e8fd9c0d627-kube-api-access-v2xhx\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.774958 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-config-data\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.775004 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-scripts\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.775049 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-combined-ca-bundle\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.775089 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-credential-keys\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.775146 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-fernet-keys\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.864341 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-scripts\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.866698 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-credential-keys\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.873657 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-fernet-keys\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.873940 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2xhx\" (UniqueName: \"kubernetes.io/projected/b0a9b890-4390-4543-ad98-6e8fd9c0d627-kube-api-access-v2xhx\") pod \"keystone-bootstrap-fcglk\" (UID: 
\"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.878762 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-combined-ca-bundle\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:57 crc kubenswrapper[4731]: I1125 15:13:57.881958 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-config-data\") pod \"keystone-bootstrap-fcglk\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:58 crc kubenswrapper[4731]: I1125 15:13:58.154505 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:13:58 crc kubenswrapper[4731]: I1125 15:13:58.821128 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d8668c-960f-40c9-a458-0827ec36137e" path="/var/lib/kubelet/pods/b0d8668c-960f-40c9-a458-0827ec36137e/volumes" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.632644 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.705765 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksck8\" (UniqueName: \"kubernetes.io/projected/a65199b3-dcd3-480f-9968-d6a0df2e8178-kube-api-access-ksck8\") pod \"a65199b3-dcd3-480f-9968-d6a0df2e8178\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.706123 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"a65199b3-dcd3-480f-9968-d6a0df2e8178\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.706200 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-combined-ca-bundle\") pod \"a65199b3-dcd3-480f-9968-d6a0df2e8178\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.706274 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-internal-tls-certs\") pod \"a65199b3-dcd3-480f-9968-d6a0df2e8178\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.706321 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-config-data\") pod \"a65199b3-dcd3-480f-9968-d6a0df2e8178\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.706348 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-scripts\") pod \"a65199b3-dcd3-480f-9968-d6a0df2e8178\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " Nov 25 15:14:05 crc 
kubenswrapper[4731]: I1125 15:14:05.706384 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-httpd-run\") pod \"a65199b3-dcd3-480f-9968-d6a0df2e8178\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.706465 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-logs\") pod \"a65199b3-dcd3-480f-9968-d6a0df2e8178\" (UID: \"a65199b3-dcd3-480f-9968-d6a0df2e8178\") " Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.707474 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-logs" (OuterVolumeSpecName: "logs") pod "a65199b3-dcd3-480f-9968-d6a0df2e8178" (UID: "a65199b3-dcd3-480f-9968-d6a0df2e8178"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.707718 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a65199b3-dcd3-480f-9968-d6a0df2e8178" (UID: "a65199b3-dcd3-480f-9968-d6a0df2e8178"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.733543 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "a65199b3-dcd3-480f-9968-d6a0df2e8178" (UID: "a65199b3-dcd3-480f-9968-d6a0df2e8178"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.738253 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a65199b3-dcd3-480f-9968-d6a0df2e8178-kube-api-access-ksck8" (OuterVolumeSpecName: "kube-api-access-ksck8") pod "a65199b3-dcd3-480f-9968-d6a0df2e8178" (UID: "a65199b3-dcd3-480f-9968-d6a0df2e8178"). InnerVolumeSpecName "kube-api-access-ksck8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.739962 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-scripts" (OuterVolumeSpecName: "scripts") pod "a65199b3-dcd3-480f-9968-d6a0df2e8178" (UID: "a65199b3-dcd3-480f-9968-d6a0df2e8178"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.753351 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a65199b3-dcd3-480f-9968-d6a0df2e8178" (UID: "a65199b3-dcd3-480f-9968-d6a0df2e8178"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.791966 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a65199b3-dcd3-480f-9968-d6a0df2e8178" (UID: "a65199b3-dcd3-480f-9968-d6a0df2e8178"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.793103 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-config-data" (OuterVolumeSpecName: "config-data") pod "a65199b3-dcd3-480f-9968-d6a0df2e8178" (UID: "a65199b3-dcd3-480f-9968-d6a0df2e8178"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.808200 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.808227 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksck8\" (UniqueName: \"kubernetes.io/projected/a65199b3-dcd3-480f-9968-d6a0df2e8178-kube-api-access-ksck8\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.808263 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.808274 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.808282 4731 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.808290 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.808298 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a65199b3-dcd3-480f-9968-d6a0df2e8178-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.808305 4731 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a65199b3-dcd3-480f-9968-d6a0df2e8178-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.826934 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.910279 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 
15:14:05.977301 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"a65199b3-dcd3-480f-9968-d6a0df2e8178","Type":"ContainerDied","Data":"ae2ee0e68b25c92c1b65d35c9e1201370cee8861f9c363191d42ee6fbe6e5aed"} Nov 25 15:14:05 crc kubenswrapper[4731]: I1125 15:14:05.977360 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.032316 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.046012 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.066869 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:14:06 crc kubenswrapper[4731]: E1125 15:14:06.067393 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65199b3-dcd3-480f-9968-d6a0df2e8178" containerName="glance-httpd" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.067414 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65199b3-dcd3-480f-9968-d6a0df2e8178" containerName="glance-httpd" Nov 25 15:14:06 crc kubenswrapper[4731]: E1125 15:14:06.067428 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a65199b3-dcd3-480f-9968-d6a0df2e8178" containerName="glance-log" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.067435 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a65199b3-dcd3-480f-9968-d6a0df2e8178" containerName="glance-log" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.067664 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65199b3-dcd3-480f-9968-d6a0df2e8178" containerName="glance-log" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.067700 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a65199b3-dcd3-480f-9968-d6a0df2e8178" containerName="glance-httpd" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.068739 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.074375 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.074607 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.086498 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.218987 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.219079 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.219109 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.219132 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.219162 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.219207 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27ztg\" (UniqueName: \"kubernetes.io/projected/4b46ae25-5a68-4181-a69e-c1620fd2c060-kube-api-access-27ztg\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.219494 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.219597 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.256216 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.304960 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.305080 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.321545 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.321594 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27ztg\" (UniqueName: \"kubernetes.io/projected/4b46ae25-5a68-4181-a69e-c1620fd2c060-kube-api-access-27ztg\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.321673 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.321715 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.321770 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.321848 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.321872 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-scripts\") pod \"glance-default-internal-api-0\" 
(UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.321875 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.321892 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.322099 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-logs\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.322393 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.328318 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.328502 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.328847 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.329608 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.339802 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27ztg\" (UniqueName: \"kubernetes.io/projected/4b46ae25-5a68-4181-a69e-c1620fd2c060-kube-api-access-27ztg\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 
15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.345483 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.407981 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: E1125 15:14:06.437582 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Nov 25 15:14:06 crc kubenswrapper[4731]: E1125 15:14:06.437713 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfgsf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-kplpn_openstack(b5547f3b-298a-4906-8c60-1f51c87c1059): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:14:06 crc kubenswrapper[4731]: E1125 15:14:06.438775 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-kplpn" podUID="b5547f3b-298a-4906-8c60-1f51c87c1059" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.438914 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.523686 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-combined-ca-bundle\") pod \"96bacdfa-848f-4d9f-80eb-ba476fec752d\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.523799 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhlbf\" (UniqueName: \"kubernetes.io/projected/96bacdfa-848f-4d9f-80eb-ba476fec752d-kube-api-access-nhlbf\") pod \"96bacdfa-848f-4d9f-80eb-ba476fec752d\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.523847 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-public-tls-certs\") pod \"96bacdfa-848f-4d9f-80eb-ba476fec752d\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.523868 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"96bacdfa-848f-4d9f-80eb-ba476fec752d\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.523907 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-config-data\") pod \"96bacdfa-848f-4d9f-80eb-ba476fec752d\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.523925 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-scripts\") pod \"96bacdfa-848f-4d9f-80eb-ba476fec752d\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.524003 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-httpd-run\") pod \"96bacdfa-848f-4d9f-80eb-ba476fec752d\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.524104 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-logs\") pod \"96bacdfa-848f-4d9f-80eb-ba476fec752d\" (UID: \"96bacdfa-848f-4d9f-80eb-ba476fec752d\") " Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.524988 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "96bacdfa-848f-4d9f-80eb-ba476fec752d" (UID: "96bacdfa-848f-4d9f-80eb-ba476fec752d"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.525008 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-logs" (OuterVolumeSpecName: "logs") pod "96bacdfa-848f-4d9f-80eb-ba476fec752d" (UID: "96bacdfa-848f-4d9f-80eb-ba476fec752d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.528389 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-scripts" (OuterVolumeSpecName: "scripts") pod "96bacdfa-848f-4d9f-80eb-ba476fec752d" (UID: "96bacdfa-848f-4d9f-80eb-ba476fec752d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.547565 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96bacdfa-848f-4d9f-80eb-ba476fec752d-kube-api-access-nhlbf" (OuterVolumeSpecName: "kube-api-access-nhlbf") pod "96bacdfa-848f-4d9f-80eb-ba476fec752d" (UID: "96bacdfa-848f-4d9f-80eb-ba476fec752d"). InnerVolumeSpecName "kube-api-access-nhlbf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.549006 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "96bacdfa-848f-4d9f-80eb-ba476fec752d" (UID: "96bacdfa-848f-4d9f-80eb-ba476fec752d"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.562155 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96bacdfa-848f-4d9f-80eb-ba476fec752d" (UID: "96bacdfa-848f-4d9f-80eb-ba476fec752d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.583264 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-config-data" (OuterVolumeSpecName: "config-data") pod "96bacdfa-848f-4d9f-80eb-ba476fec752d" (UID: "96bacdfa-848f-4d9f-80eb-ba476fec752d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.594467 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "96bacdfa-848f-4d9f-80eb-ba476fec752d" (UID: "96bacdfa-848f-4d9f-80eb-ba476fec752d"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.626189 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.626231 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.626305 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhlbf\" (UniqueName: \"kubernetes.io/projected/96bacdfa-848f-4d9f-80eb-ba476fec752d-kube-api-access-nhlbf\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.626321 4731 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.626362 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.626377 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.626387 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96bacdfa-848f-4d9f-80eb-ba476fec752d-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.626398 4731 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/96bacdfa-848f-4d9f-80eb-ba476fec752d-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.654611 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.728366 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.822147 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a65199b3-dcd3-480f-9968-d6a0df2e8178" path="/var/lib/kubelet/pods/a65199b3-dcd3-480f-9968-d6a0df2e8178/volumes" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.951128 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.992953 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:14:06 crc kubenswrapper[4731]: I1125 15:14:06.992960 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"96bacdfa-848f-4d9f-80eb-ba476fec752d","Type":"ContainerDied","Data":"79ce2a1e3729fd9f4e19dd5fdfcba4e4d9943e9b7575ca57923af98e24b9cf0e"} Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.001513 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" event={"ID":"e032f375-15e6-4cbb-adaf-ec7187abecba","Type":"ContainerDied","Data":"c524aa9fc6ab18a8e62c39cb7dd84ab62d2eac179fdf0496a3dbaa8e10eb4d66"} Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.001553 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" Nov 25 15:14:07 crc kubenswrapper[4731]: E1125 15:14:07.006727 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-kplpn" podUID="b5547f3b-298a-4906-8c60-1f51c87c1059" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.018152 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.031288 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.032935 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-config\") pod \"e032f375-15e6-4cbb-adaf-ec7187abecba\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.033092 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-nb\") pod \"e032f375-15e6-4cbb-adaf-ec7187abecba\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.033127 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-dns-svc\") pod \"e032f375-15e6-4cbb-adaf-ec7187abecba\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.033155 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c96m\" (UniqueName: \"kubernetes.io/projected/e032f375-15e6-4cbb-adaf-ec7187abecba-kube-api-access-6c96m\") pod \"e032f375-15e6-4cbb-adaf-ec7187abecba\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.033241 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-sb\") pod \"e032f375-15e6-4cbb-adaf-ec7187abecba\" (UID: \"e032f375-15e6-4cbb-adaf-ec7187abecba\") " Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.036161 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:14:07 crc 
kubenswrapper[4731]: E1125 15:14:07.036602 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="dnsmasq-dns" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.036620 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="dnsmasq-dns" Nov 25 15:14:07 crc kubenswrapper[4731]: E1125 15:14:07.036634 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bacdfa-848f-4d9f-80eb-ba476fec752d" containerName="glance-log" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.036642 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bacdfa-848f-4d9f-80eb-ba476fec752d" containerName="glance-log" Nov 25 15:14:07 crc kubenswrapper[4731]: E1125 15:14:07.036667 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="init" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.036675 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="init" Nov 25 15:14:07 crc kubenswrapper[4731]: E1125 15:14:07.036688 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96bacdfa-848f-4d9f-80eb-ba476fec752d" containerName="glance-httpd" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.036694 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="96bacdfa-848f-4d9f-80eb-ba476fec752d" containerName="glance-httpd" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.036868 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="dnsmasq-dns" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.036886 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bacdfa-848f-4d9f-80eb-ba476fec752d" containerName="glance-log" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.036898 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="96bacdfa-848f-4d9f-80eb-ba476fec752d" containerName="glance-httpd" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.037919 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.038084 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e032f375-15e6-4cbb-adaf-ec7187abecba-kube-api-access-6c96m" (OuterVolumeSpecName: "kube-api-access-6c96m") pod "e032f375-15e6-4cbb-adaf-ec7187abecba" (UID: "e032f375-15e6-4cbb-adaf-ec7187abecba"). InnerVolumeSpecName "kube-api-access-6c96m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.042196 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.043536 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 15:14:07 crc kubenswrapper[4731]: E1125 15:14:07.055613 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Nov 25 15:14:07 crc kubenswrapper[4731]: E1125 15:14:07.055747 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vpkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-c55qz_openstack(4bf94950-c565-46c7-95b1-9a62e2915979): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:14:07 crc kubenswrapper[4731]: E1125 15:14:07.057741 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-c55qz" podUID="4bf94950-c565-46c7-95b1-9a62e2915979" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.067272 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:14:07 crc kubenswrapper[4731]: 
I1125 15:14:07.096087 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e032f375-15e6-4cbb-adaf-ec7187abecba" (UID: "e032f375-15e6-4cbb-adaf-ec7187abecba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.103561 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-config" (OuterVolumeSpecName: "config") pod "e032f375-15e6-4cbb-adaf-ec7187abecba" (UID: "e032f375-15e6-4cbb-adaf-ec7187abecba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.119626 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e032f375-15e6-4cbb-adaf-ec7187abecba" (UID: "e032f375-15e6-4cbb-adaf-ec7187abecba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.124945 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e032f375-15e6-4cbb-adaf-ec7187abecba" (UID: "e032f375-15e6-4cbb-adaf-ec7187abecba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.135146 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.135395 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-logs\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136335 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5wd4\" (UniqueName: \"kubernetes.io/projected/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-kube-api-access-r5wd4\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136531 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136598 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136613 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136632 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136656 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136858 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136872 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136881 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c96m\" (UniqueName: \"kubernetes.io/projected/e032f375-15e6-4cbb-adaf-ec7187abecba-kube-api-access-6c96m\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136891 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.136899 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e032f375-15e6-4cbb-adaf-ec7187abecba-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.238834 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.238876 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-logs\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.238895 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-r5wd4\" (UniqueName: \"kubernetes.io/projected/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-kube-api-access-r5wd4\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.238948 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.238980 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.238995 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.239013 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.239032 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.239121 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.239437 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.239671 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-logs\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.244232 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.246557 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.254840 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.255497 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.258415 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5wd4\" (UniqueName: \"kubernetes.io/projected/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-kube-api-access-r5wd4\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.270843 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " pod="openstack/glance-default-external-api-0" Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.335329 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2swq5"] Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.343462 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-2swq5"] Nov 25 15:14:07 crc kubenswrapper[4731]: I1125 15:14:07.409278 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:14:08 crc kubenswrapper[4731]: E1125 15:14:08.011561 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-c55qz" podUID="4bf94950-c565-46c7-95b1-9a62e2915979" Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.193028 4731 scope.go:117] "RemoveContainer" containerID="2097f669c239666a85c68627aef11b456c23fe477f562ddf01539745b8bd919c" Nov 25 15:14:08 crc kubenswrapper[4731]: E1125 15:14:08.195649 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Nov 25 15:14:08 crc kubenswrapper[4731]: E1125 15:14:08.195847 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-w5wpl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-dv2l4_openstack(d828d0bf-a1e4-45a4-a840-4ddd1f84ed79): ErrImagePull: rpc error: code = Canceled desc = copying 
config: context canceled" logger="UnhandledError" Nov 25 15:14:08 crc kubenswrapper[4731]: E1125 15:14:08.197143 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-dv2l4" podUID="d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.371933 4731 scope.go:117] "RemoveContainer" containerID="8e115eb1c923d259cc0c45e0a3bbb8e2a696b950d948f8b201c60427a111af68" Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.424506 4731 scope.go:117] "RemoveContainer" containerID="b3a857f70985c9584454e1aa3f5710b20ec84cb08860f98b69cb3c3dbc817a12" Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.468061 4731 scope.go:117] "RemoveContainer" containerID="c9cf5881cde6c6e171e4b8db25d9e0eef4842ba28732bc4e791d863d6c6b0b55" Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.489678 4731 scope.go:117] "RemoveContainer" containerID="5e7598b1563f22146a5642a2b91b284e0289259a66cd71559737621209651d0c" Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.514269 4731 scope.go:117] "RemoveContainer" containerID="dad39291fcc899ee19ab7172511e80c997e9d62c967a1e381a6c890767c9fcf8" Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.815813 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.827149 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96bacdfa-848f-4d9f-80eb-ba476fec752d" path="/var/lib/kubelet/pods/96bacdfa-848f-4d9f-80eb-ba476fec752d/volumes" Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.828104 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" path="/var/lib/kubelet/pods/e032f375-15e6-4cbb-adaf-ec7187abecba/volumes" Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.828796 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fcglk"] Nov 25 15:14:08 crc kubenswrapper[4731]: I1125 15:14:08.922293 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:14:08 crc kubenswrapper[4731]: W1125 15:14:08.934875 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b46ae25_5a68_4181_a69e_c1620fd2c060.slice/crio-cfb8146855b3354e980ac05b095b1d05b065e7148a6c8e88b5a85b9f7f8378e6 WatchSource:0}: Error finding container cfb8146855b3354e980ac05b095b1d05b065e7148a6c8e88b5a85b9f7f8378e6: Status 404 returned error can't find the container with id cfb8146855b3354e980ac05b095b1d05b065e7148a6c8e88b5a85b9f7f8378e6 Nov 25 15:14:09 crc kubenswrapper[4731]: I1125 15:14:09.037554 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b46ae25-5a68-4181-a69e-c1620fd2c060","Type":"ContainerStarted","Data":"cfb8146855b3354e980ac05b095b1d05b065e7148a6c8e88b5a85b9f7f8378e6"} Nov 25 15:14:09 crc kubenswrapper[4731]: I1125 15:14:09.043687 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vs6g5" event={"ID":"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e","Type":"ContainerStarted","Data":"4f40de864f3fb09f6a4a92392fb48c7780105855c10b7061497df7791acd5e20"} Nov 25 15:14:09 crc kubenswrapper[4731]: I1125 15:14:09.062078 4731 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/placement-db-sync-vs6g5" podStartSLOduration=5.933860794 podStartE2EDuration="35.062061686s" podCreationTimestamp="2025-11-25 15:13:34 +0000 UTC" firstStartedPulling="2025-11-25 15:13:37.299128012 +0000 UTC m=+1176.046262432" lastFinishedPulling="2025-11-25 15:14:06.427328894 +0000 UTC m=+1205.174463324" observedRunningTime="2025-11-25 15:14:09.060780514 +0000 UTC m=+1207.807914964" watchObservedRunningTime="2025-11-25 15:14:09.062061686 +0000 UTC m=+1207.809196136" Nov 25 15:14:09 crc kubenswrapper[4731]: I1125 15:14:09.062220 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e845d406-36a7-4202-b123-51efd30331b8","Type":"ContainerStarted","Data":"91e374ba90cfe72dbfe20b2345f7db1c7d85e2b12ca8ae0ca85fa2a4161cc54f"} Nov 25 15:14:09 crc kubenswrapper[4731]: I1125 15:14:09.069036 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"bf03c5dc783d5b39c33c89b81293bca28d3310abcd7786c9974eab3a74d91a80"} Nov 25 15:14:09 crc kubenswrapper[4731]: I1125 15:14:09.070526 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fcglk" event={"ID":"b0a9b890-4390-4543-ad98-6e8fd9c0d627","Type":"ContainerStarted","Data":"9274492e2af99844fc6495c2135c6f62b8dcb32943d8560e3f6905e98fee8ca9"} Nov 25 15:14:09 crc kubenswrapper[4731]: E1125 15:14:09.071969 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-dv2l4" podUID="d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" Nov 25 15:14:09 crc kubenswrapper[4731]: I1125 15:14:09.825285 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:14:10 crc kubenswrapper[4731]: I1125 15:14:10.084042 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fcglk" event={"ID":"b0a9b890-4390-4543-ad98-6e8fd9c0d627","Type":"ContainerStarted","Data":"abee0b8c4825cfdcf9430aa40abb5d70a1777babd7e9536d2175ac8191bc10a4"} Nov 25 15:14:10 crc kubenswrapper[4731]: I1125 15:14:10.087613 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51de4e1c-41f4-42ae-be32-ba6ad0c215f7","Type":"ContainerStarted","Data":"2f96d4cbb6358d6985d2edb591db60d5b6cfa0158e63aca3df9b2626fcfb73c8"} Nov 25 15:14:10 crc kubenswrapper[4731]: I1125 15:14:10.090850 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b46ae25-5a68-4181-a69e-c1620fd2c060","Type":"ContainerStarted","Data":"a71b6d53b34fdc67921eda79cab827a4e9d70caae43bb9281ab73bd2a115d7cc"} Nov 25 15:14:10 crc kubenswrapper[4731]: I1125 15:14:10.109917 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fcglk" podStartSLOduration=13.10989788 podStartE2EDuration="13.10989788s" podCreationTimestamp="2025-11-25 15:13:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:10.109310436 +0000 UTC m=+1208.856444866" watchObservedRunningTime="2025-11-25 15:14:10.10989788 +0000 UTC m=+1208.857032320" Nov 25 15:14:11 crc 
kubenswrapper[4731]: I1125 15:14:11.100684 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51de4e1c-41f4-42ae-be32-ba6ad0c215f7","Type":"ContainerStarted","Data":"4b82b717e6e11a23fd4ed5ede1cf6036a1585f3e11660179b59a0e77aceb8aac"} Nov 25 15:14:11 crc kubenswrapper[4731]: I1125 15:14:11.103825 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b46ae25-5a68-4181-a69e-c1620fd2c060","Type":"ContainerStarted","Data":"8452b379fa5bb1295fb1ef4d48747516b1ec0bbab83e1bf933c0805f068148cb"} Nov 25 15:14:11 crc kubenswrapper[4731]: I1125 15:14:11.107252 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e845d406-36a7-4202-b123-51efd30331b8","Type":"ContainerStarted","Data":"7535d9df3e861334c651cae6618d459d26924583c180c1831d349844df7bf375"} Nov 25 15:14:11 crc kubenswrapper[4731]: I1125 15:14:11.130958 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.130932934 podStartE2EDuration="5.130932934s" podCreationTimestamp="2025-11-25 15:14:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:11.12440057 +0000 UTC m=+1209.871535030" watchObservedRunningTime="2025-11-25 15:14:11.130932934 +0000 UTC m=+1209.878067374" Nov 25 15:14:11 crc kubenswrapper[4731]: I1125 15:14:11.256408 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-2swq5" podUID="e032f375-15e6-4cbb-adaf-ec7187abecba" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Nov 25 15:14:12 crc kubenswrapper[4731]: I1125 15:14:12.122059 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51de4e1c-41f4-42ae-be32-ba6ad0c215f7","Type":"ContainerStarted","Data":"f50fff902b1f5357488d2fee4492fb5988c0da38a001ab51bf7da633a84af0d5"} Nov 25 15:14:12 crc kubenswrapper[4731]: I1125 15:14:12.148308 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.148289456 podStartE2EDuration="5.148289456s" podCreationTimestamp="2025-11-25 15:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:12.147710721 +0000 UTC m=+1210.894845151" watchObservedRunningTime="2025-11-25 15:14:12.148289456 +0000 UTC m=+1210.895423886" Nov 25 15:14:13 crc kubenswrapper[4731]: I1125 15:14:13.131133 4731 generic.go:334] "Generic (PLEG): container finished" podID="06c14d90-d1d0-43ce-b4a2-3b36dad72a8e" containerID="4f40de864f3fb09f6a4a92392fb48c7780105855c10b7061497df7791acd5e20" exitCode=0 Nov 25 15:14:13 crc kubenswrapper[4731]: I1125 15:14:13.131256 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vs6g5" event={"ID":"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e","Type":"ContainerDied","Data":"4f40de864f3fb09f6a4a92392fb48c7780105855c10b7061497df7791acd5e20"} Nov 25 15:14:14 crc kubenswrapper[4731]: I1125 15:14:14.159301 4731 generic.go:334] "Generic (PLEG): container finished" podID="b0a9b890-4390-4543-ad98-6e8fd9c0d627" containerID="abee0b8c4825cfdcf9430aa40abb5d70a1777babd7e9536d2175ac8191bc10a4" exitCode=0 Nov 25 15:14:14 crc kubenswrapper[4731]: I1125 
15:14:14.159731 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fcglk" event={"ID":"b0a9b890-4390-4543-ad98-6e8fd9c0d627","Type":"ContainerDied","Data":"abee0b8c4825cfdcf9430aa40abb5d70a1777babd7e9536d2175ac8191bc10a4"} Nov 25 15:14:15 crc kubenswrapper[4731]: I1125 15:14:15.169705 4731 generic.go:334] "Generic (PLEG): container finished" podID="8836cab0-6606-45e6-adb7-2797a9e7d022" containerID="32e56b99379bffd1ca4c6d2daf598c435fb0e4739aacdef3d7f1868f24cbd2c0" exitCode=0 Nov 25 15:14:15 crc kubenswrapper[4731]: I1125 15:14:15.169794 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n6xcb" event={"ID":"8836cab0-6606-45e6-adb7-2797a9e7d022","Type":"ContainerDied","Data":"32e56b99379bffd1ca4c6d2daf598c435fb0e4739aacdef3d7f1868f24cbd2c0"} Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.046548 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-vs6g5" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.135989 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.192927 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fcglk" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.193506 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fcglk" event={"ID":"b0a9b890-4390-4543-ad98-6e8fd9c0d627","Type":"ContainerDied","Data":"9274492e2af99844fc6495c2135c6f62b8dcb32943d8560e3f6905e98fee8ca9"} Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.193538 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9274492e2af99844fc6495c2135c6f62b8dcb32943d8560e3f6905e98fee8ca9" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.196964 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-vs6g5" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.196969 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-vs6g5" event={"ID":"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e","Type":"ContainerDied","Data":"b6b2badae8c3f06a6fe3df3e9e3da6dfad73c0b751bb1f9bb93bf0a1e2332ae2"} Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.197064 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b2badae8c3f06a6fe3df3e9e3da6dfad73c0b751bb1f9bb93bf0a1e2332ae2" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.197137 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-config-data\") pod \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.197364 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qcxdz\" (UniqueName: \"kubernetes.io/projected/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-kube-api-access-qcxdz\") pod \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.197391 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-scripts\") pod \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.197407 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-combined-ca-bundle\") pod \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.197458 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-logs\") pod \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\" (UID: \"06c14d90-d1d0-43ce-b4a2-3b36dad72a8e\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.198098 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-logs" (OuterVolumeSpecName: "logs") pod "06c14d90-d1d0-43ce-b4a2-3b36dad72a8e" (UID: "06c14d90-d1d0-43ce-b4a2-3b36dad72a8e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.209176 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-kube-api-access-qcxdz" (OuterVolumeSpecName: "kube-api-access-qcxdz") pod "06c14d90-d1d0-43ce-b4a2-3b36dad72a8e" (UID: "06c14d90-d1d0-43ce-b4a2-3b36dad72a8e"). InnerVolumeSpecName "kube-api-access-qcxdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.211449 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-scripts" (OuterVolumeSpecName: "scripts") pod "06c14d90-d1d0-43ce-b4a2-3b36dad72a8e" (UID: "06c14d90-d1d0-43ce-b4a2-3b36dad72a8e"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.239173 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "06c14d90-d1d0-43ce-b4a2-3b36dad72a8e" (UID: "06c14d90-d1d0-43ce-b4a2-3b36dad72a8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.248599 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-config-data" (OuterVolumeSpecName: "config-data") pod "06c14d90-d1d0-43ce-b4a2-3b36dad72a8e" (UID: "06c14d90-d1d0-43ce-b4a2-3b36dad72a8e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.291653 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-65d859dcf6-bblhv"] Nov 25 15:14:16 crc kubenswrapper[4731]: E1125 15:14:16.292132 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a9b890-4390-4543-ad98-6e8fd9c0d627" containerName="keystone-bootstrap" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.292148 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a9b890-4390-4543-ad98-6e8fd9c0d627" containerName="keystone-bootstrap" Nov 25 15:14:16 crc kubenswrapper[4731]: E1125 15:14:16.292161 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c14d90-d1d0-43ce-b4a2-3b36dad72a8e" containerName="placement-db-sync" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.292168 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c14d90-d1d0-43ce-b4a2-3b36dad72a8e" containerName="placement-db-sync" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.292328 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a9b890-4390-4543-ad98-6e8fd9c0d627" containerName="keystone-bootstrap" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.292346 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c14d90-d1d0-43ce-b4a2-3b36dad72a8e" containerName="placement-db-sync" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.292957 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.297400 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.297629 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.299128 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-combined-ca-bundle\") pod \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.299241 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-config-data\") pod \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.299276 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-credential-keys\") pod \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.299298 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-fernet-keys\") pod \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.309936 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2xhx\" (UniqueName: \"kubernetes.io/projected/b0a9b890-4390-4543-ad98-6e8fd9c0d627-kube-api-access-v2xhx\") pod \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.312483 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65d859dcf6-bblhv"] Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.318242 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b0a9b890-4390-4543-ad98-6e8fd9c0d627" (UID: "b0a9b890-4390-4543-ad98-6e8fd9c0d627"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.320225 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b0a9b890-4390-4543-ad98-6e8fd9c0d627" (UID: "b0a9b890-4390-4543-ad98-6e8fd9c0d627"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.322884 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a9b890-4390-4543-ad98-6e8fd9c0d627-kube-api-access-v2xhx" (OuterVolumeSpecName: "kube-api-access-v2xhx") pod "b0a9b890-4390-4543-ad98-6e8fd9c0d627" (UID: "b0a9b890-4390-4543-ad98-6e8fd9c0d627"). InnerVolumeSpecName "kube-api-access-v2xhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.327977 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-scripts\") pod \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\" (UID: \"b0a9b890-4390-4543-ad98-6e8fd9c0d627\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.328885 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-public-tls-certs\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.328924 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-credential-keys\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.328967 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-config-data\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.328984 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqvzs\" (UniqueName: \"kubernetes.io/projected/3f147555-b406-4784-8ee3-b25c2521ce34-kube-api-access-fqvzs\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329006 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-scripts\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329023 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-combined-ca-bundle\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329072 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-internal-tls-certs\") pod 
\"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329099 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-fernet-keys\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329230 4731 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-credential-keys\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329245 4731 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329257 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qcxdz\" (UniqueName: \"kubernetes.io/projected/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-kube-api-access-qcxdz\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329267 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329276 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329285 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329294 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.329303 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2xhx\" (UniqueName: \"kubernetes.io/projected/b0a9b890-4390-4543-ad98-6e8fd9c0d627-kube-api-access-v2xhx\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.334003 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-scripts" (OuterVolumeSpecName: "scripts") pod "b0a9b890-4390-4543-ad98-6e8fd9c0d627" (UID: "b0a9b890-4390-4543-ad98-6e8fd9c0d627"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.363263 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-config-data" (OuterVolumeSpecName: "config-data") pod "b0a9b890-4390-4543-ad98-6e8fd9c0d627" (UID: "b0a9b890-4390-4543-ad98-6e8fd9c0d627"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.379826 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0a9b890-4390-4543-ad98-6e8fd9c0d627" (UID: "b0a9b890-4390-4543-ad98-6e8fd9c0d627"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.408352 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.409209 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431143 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-public-tls-certs\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431206 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-credential-keys\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431245 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-config-data\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431270 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqvzs\" (UniqueName: \"kubernetes.io/projected/3f147555-b406-4784-8ee3-b25c2521ce34-kube-api-access-fqvzs\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431292 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-scripts\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431313 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-combined-ca-bundle\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431354 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-internal-tls-certs\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 
25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431394 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-fernet-keys\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431464 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431478 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.431492 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0a9b890-4390-4543-ad98-6e8fd9c0d627-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.436899 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-scripts\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.439702 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-config-data\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.439723 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-combined-ca-bundle\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.443651 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-internal-tls-certs\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.444679 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-public-tls-certs\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.446460 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-fernet-keys\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.449485 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/3f147555-b406-4784-8ee3-b25c2521ce34-credential-keys\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.449997 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqvzs\" (UniqueName: \"kubernetes.io/projected/3f147555-b406-4784-8ee3-b25c2521ce34-kube-api-access-fqvzs\") pod \"keystone-65d859dcf6-bblhv\" (UID: \"3f147555-b406-4784-8ee3-b25c2521ce34\") " pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.465853 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.470414 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.501281 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.628877 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.633909 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5g6w\" (UniqueName: \"kubernetes.io/projected/8836cab0-6606-45e6-adb7-2797a9e7d022-kube-api-access-z5g6w\") pod \"8836cab0-6606-45e6-adb7-2797a9e7d022\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.633983 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-combined-ca-bundle\") pod \"8836cab0-6606-45e6-adb7-2797a9e7d022\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.634019 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-config\") pod \"8836cab0-6606-45e6-adb7-2797a9e7d022\" (UID: \"8836cab0-6606-45e6-adb7-2797a9e7d022\") " Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.637200 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8836cab0-6606-45e6-adb7-2797a9e7d022-kube-api-access-z5g6w" (OuterVolumeSpecName: "kube-api-access-z5g6w") pod "8836cab0-6606-45e6-adb7-2797a9e7d022" (UID: "8836cab0-6606-45e6-adb7-2797a9e7d022"). InnerVolumeSpecName "kube-api-access-z5g6w". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.658760 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-config" (OuterVolumeSpecName: "config") pod "8836cab0-6606-45e6-adb7-2797a9e7d022" (UID: "8836cab0-6606-45e6-adb7-2797a9e7d022"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.660119 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8836cab0-6606-45e6-adb7-2797a9e7d022" (UID: "8836cab0-6606-45e6-adb7-2797a9e7d022"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.741791 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5g6w\" (UniqueName: \"kubernetes.io/projected/8836cab0-6606-45e6-adb7-2797a9e7d022-kube-api-access-z5g6w\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.741835 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:16 crc kubenswrapper[4731]: I1125 15:14:16.741849 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8836cab0-6606-45e6-adb7-2797a9e7d022-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.110153 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65d859dcf6-bblhv"] Nov 25 15:14:17 crc kubenswrapper[4731]: W1125 15:14:17.125912 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f147555_b406_4784_8ee3_b25c2521ce34.slice/crio-c17f03fc9172f325d4ac96a1ba85d425d016a931d54f65d19c388e1c51cae6bc WatchSource:0}: Error finding container c17f03fc9172f325d4ac96a1ba85d425d016a931d54f65d19c388e1c51cae6bc: Status 404 returned error can't find the container with id c17f03fc9172f325d4ac96a1ba85d425d016a931d54f65d19c388e1c51cae6bc Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.171526 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ccd9b5c94-l8cl7"] Nov 25 15:14:17 crc kubenswrapper[4731]: E1125 15:14:17.173826 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8836cab0-6606-45e6-adb7-2797a9e7d022" containerName="neutron-db-sync" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.173854 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8836cab0-6606-45e6-adb7-2797a9e7d022" containerName="neutron-db-sync" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.174051 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8836cab0-6606-45e6-adb7-2797a9e7d022" containerName="neutron-db-sync" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.175073 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.182350 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.182399 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.182639 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rshp2" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.183358 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.183449 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.205702 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ccd9b5c94-l8cl7"] Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.253746 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-n6xcb" event={"ID":"8836cab0-6606-45e6-adb7-2797a9e7d022","Type":"ContainerDied","Data":"f4bc6494d82b3d5d4a602ef116d898759ac2c3ad0b0fa3e6af5ce1e7570826ce"} Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.254014 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4bc6494d82b3d5d4a602ef116d898759ac2c3ad0b0fa3e6af5ce1e7570826ce" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.254139 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-n6xcb" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.255806 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65d859dcf6-bblhv" event={"ID":"3f147555-b406-4784-8ee3-b25c2521ce34","Type":"ContainerStarted","Data":"c17f03fc9172f325d4ac96a1ba85d425d016a931d54f65d19c388e1c51cae6bc"} Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.257470 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e845d406-36a7-4202-b123-51efd30331b8","Type":"ContainerStarted","Data":"28e7527159e60c8f6aff109a5bdff5c250bf0f168eb13bacf9a6003409cd2c39"} Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.258568 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.258650 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.352259 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht66g\" (UniqueName: \"kubernetes.io/projected/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-kube-api-access-ht66g\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.352774 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-scripts\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" 
Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.353022 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-combined-ca-bundle\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.353220 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-public-tls-certs\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.353327 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-logs\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.353461 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-internal-tls-certs\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.354097 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-config-data\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.409775 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.411033 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.455425 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-config-data\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.456420 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht66g\" (UniqueName: \"kubernetes.io/projected/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-kube-api-access-ht66g\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.457766 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-scripts\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 
15:14:17.457990 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-combined-ca-bundle\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.458141 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-public-tls-certs\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.458506 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-logs\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.458607 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-internal-tls-certs\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.461298 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.462340 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-logs\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.468786 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-scripts\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.469711 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-config-data\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.473439 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-combined-ca-bundle\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.474187 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-internal-tls-certs\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 
15:14:17.474807 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-public-tls-certs\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.483455 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht66g\" (UniqueName: \"kubernetes.io/projected/3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b-kube-api-access-ht66g\") pod \"placement-ccd9b5c94-l8cl7\" (UID: \"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b\") " pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.506915 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.579967 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gq9xr"] Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.581546 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.589093 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gq9xr"] Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.599591 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.664206 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-svc\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.664271 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.664711 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.664745 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-config\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.664797 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: 
\"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.664840 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79vjl\" (UniqueName: \"kubernetes.io/projected/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-kube-api-access-79vjl\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.766842 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.767201 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79vjl\" (UniqueName: \"kubernetes.io/projected/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-kube-api-access-79vjl\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.767327 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-svc\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.767348 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.767805 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.768113 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-svc\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.768221 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.768249 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-config\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " 
pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.768724 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.768875 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.769059 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-config\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.785284 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79vjl\" (UniqueName: \"kubernetes.io/projected/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-kube-api-access-79vjl\") pod \"dnsmasq-dns-6b7b667979-gq9xr\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") " pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.857442 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6cf794cbfb-89vbb"] Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.862005 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.866532 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.866671 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.867004 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-w9gd8" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.873394 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.885516 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cf794cbfb-89vbb"] Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.927466 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.977838 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-httpd-config\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.977948 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-ovndb-tls-certs\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.978045 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcdkm\" (UniqueName: \"kubernetes.io/projected/87214202-a4d1-48c1-af50-b9eed07cab9d-kube-api-access-vcdkm\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.978113 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-combined-ca-bundle\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:17 crc kubenswrapper[4731]: I1125 15:14:17.978282 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-config\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.082076 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-combined-ca-bundle\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.082312 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-config\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.082359 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-httpd-config\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.082392 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-ovndb-tls-certs\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " 
pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.082437 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcdkm\" (UniqueName: \"kubernetes.io/projected/87214202-a4d1-48c1-af50-b9eed07cab9d-kube-api-access-vcdkm\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.087460 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-combined-ca-bundle\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.090233 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-config\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.090976 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-httpd-config\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.098530 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-ovndb-tls-certs\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.102210 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcdkm\" (UniqueName: \"kubernetes.io/projected/87214202-a4d1-48c1-af50-b9eed07cab9d-kube-api-access-vcdkm\") pod \"neutron-6cf794cbfb-89vbb\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.191057 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.193294 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ccd9b5c94-l8cl7"] Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.278371 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65d859dcf6-bblhv" event={"ID":"3f147555-b406-4784-8ee3-b25c2521ce34","Type":"ContainerStarted","Data":"15817c2138e3c4fd54a09e6fe67aa4e2de80bdece15e008ee66136770c0f1d4e"} Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.280484 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-65d859dcf6-bblhv" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.286255 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ccd9b5c94-l8cl7" event={"ID":"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b","Type":"ContainerStarted","Data":"a035e2369ef6eb8b98942bf9ca25fffe3f99b2d79e181816e7d426dae3074fde"} Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.286319 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.287035 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.315108 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-65d859dcf6-bblhv" podStartSLOduration=2.315085614 podStartE2EDuration="2.315085614s" podCreationTimestamp="2025-11-25 15:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:18.304140019 +0000 UTC m=+1217.051274449" watchObservedRunningTime="2025-11-25 15:14:18.315085614 +0000 UTC m=+1217.062220044" Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.622046 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gq9xr"] Nov 25 15:14:18 crc kubenswrapper[4731]: I1125 15:14:18.830085 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6cf794cbfb-89vbb"] Nov 25 15:14:18 crc kubenswrapper[4731]: W1125 15:14:18.840466 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87214202_a4d1_48c1_af50_b9eed07cab9d.slice/crio-52c5964791e66bc766b78fa921e30809b0f55f910f482e84c9f11e7c76b06f6c WatchSource:0}: Error finding container 52c5964791e66bc766b78fa921e30809b0f55f910f482e84c9f11e7c76b06f6c: Status 404 returned error can't find the container with id 52c5964791e66bc766b78fa921e30809b0f55f910f482e84c9f11e7c76b06f6c Nov 25 15:14:19 crc kubenswrapper[4731]: I1125 15:14:19.301379 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" event={"ID":"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc","Type":"ContainerStarted","Data":"7600a7f68253d63dbf8cb0033e7a9c75321bb5c638b6389a50a1e1a513afe997"} Nov 25 15:14:19 crc kubenswrapper[4731]: I1125 15:14:19.301676 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" event={"ID":"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc","Type":"ContainerStarted","Data":"63d0a5c079a9ea3c11cb4946bf2a770d370dd183e0b2364a21ccfd8c82d1623a"} Nov 25 15:14:19 crc kubenswrapper[4731]: I1125 15:14:19.303660 4731 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/placement-ccd9b5c94-l8cl7" event={"ID":"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b","Type":"ContainerStarted","Data":"ec0ad2d8b3e5794eae76ff3c649db54d3d09ff2ba28e82c509e1d6a48316c3f3"} Nov 25 15:14:19 crc kubenswrapper[4731]: I1125 15:14:19.311948 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf794cbfb-89vbb" event={"ID":"87214202-a4d1-48c1-af50-b9eed07cab9d","Type":"ContainerStarted","Data":"85b1ecf5ffe985766fb98c198382bdb0e5d3d5782175a6ffd6c2811214460267"} Nov 25 15:14:19 crc kubenswrapper[4731]: I1125 15:14:19.312007 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf794cbfb-89vbb" event={"ID":"87214202-a4d1-48c1-af50-b9eed07cab9d","Type":"ContainerStarted","Data":"52c5964791e66bc766b78fa921e30809b0f55f910f482e84c9f11e7c76b06f6c"} Nov 25 15:14:19 crc kubenswrapper[4731]: I1125 15:14:19.312184 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:14:19 crc kubenswrapper[4731]: I1125 15:14:19.312211 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:14:19 crc kubenswrapper[4731]: I1125 15:14:19.570483 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:19 crc kubenswrapper[4731]: I1125 15:14:19.573193 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.315030 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ccd9b5c94-l8cl7" event={"ID":"3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b","Type":"ContainerStarted","Data":"cc3634750ad81a57ba5dbbf3ea758ad189ee661efddf99c27d39d03be74a7fa6"} Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.315404 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.315431 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-ccd9b5c94-l8cl7" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.317166 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf794cbfb-89vbb" event={"ID":"87214202-a4d1-48c1-af50-b9eed07cab9d","Type":"ContainerStarted","Data":"dc1b89295e73ed9b1d568c3af548142380b7b50601c3855a739bf64056d10206"} Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.317763 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.324136 4731 generic.go:334] "Generic (PLEG): container finished" podID="b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" containerID="7600a7f68253d63dbf8cb0033e7a9c75321bb5c638b6389a50a1e1a513afe997" exitCode=0 Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.324239 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.324250 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.324994 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" event={"ID":"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc","Type":"ContainerDied","Data":"7600a7f68253d63dbf8cb0033e7a9c75321bb5c638b6389a50a1e1a513afe997"} Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.358052 4731 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ccd9b5c94-l8cl7" podStartSLOduration=3.357994642 podStartE2EDuration="3.357994642s" podCreationTimestamp="2025-11-25 15:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:20.342444432 +0000 UTC m=+1219.089578862" watchObservedRunningTime="2025-11-25 15:14:20.357994642 +0000 UTC m=+1219.105129072" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.409276 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-545595f45c-m798f"] Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.411055 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.412047 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6cf794cbfb-89vbb" podStartSLOduration=3.412030866 podStartE2EDuration="3.412030866s" podCreationTimestamp="2025-11-25 15:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:20.387338558 +0000 UTC m=+1219.134472988" watchObservedRunningTime="2025-11-25 15:14:20.412030866 +0000 UTC m=+1219.159165296" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.420269 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.420580 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.430030 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-545595f45c-m798f"] Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.542899 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-combined-ca-bundle\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.543257 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhnql\" (UniqueName: \"kubernetes.io/projected/e70199f9-3fcf-4b0f-98c5-c45faf62c604-kube-api-access-jhnql\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.543291 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-internal-tls-certs\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.543319 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-config\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 
15:14:20.543345 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-httpd-config\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.543363 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-public-tls-certs\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.543419 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-ovndb-tls-certs\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.645545 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-config\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.645607 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-httpd-config\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.645637 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-public-tls-certs\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.645735 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-ovndb-tls-certs\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.645809 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-combined-ca-bundle\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.645939 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhnql\" (UniqueName: \"kubernetes.io/projected/e70199f9-3fcf-4b0f-98c5-c45faf62c604-kube-api-access-jhnql\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.645990 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-internal-tls-certs\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.651413 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-public-tls-certs\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.651855 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-config\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.652125 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-internal-tls-certs\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.653730 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-ovndb-tls-certs\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.655675 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-httpd-config\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.664671 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e70199f9-3fcf-4b0f-98c5-c45faf62c604-combined-ca-bundle\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.668166 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhnql\" (UniqueName: \"kubernetes.io/projected/e70199f9-3fcf-4b0f-98c5-c45faf62c604-kube-api-access-jhnql\") pod \"neutron-545595f45c-m798f\" (UID: \"e70199f9-3fcf-4b0f-98c5-c45faf62c604\") " pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.703379 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.706017 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 15:14:20 crc kubenswrapper[4731]: I1125 15:14:20.787391 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:21 crc kubenswrapper[4731]: I1125 15:14:21.333849 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-c55qz" event={"ID":"4bf94950-c565-46c7-95b1-9a62e2915979","Type":"ContainerStarted","Data":"e971e1d970148de4c04f359b1dcd11e5d8557b952b092fec482781081388f214"} Nov 25 15:14:21 crc kubenswrapper[4731]: I1125 15:14:21.336116 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" event={"ID":"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc","Type":"ContainerStarted","Data":"1016014fa83f77caa25d75cae216d0fd13a5f3c882b8edd5beec87f0ef88fc76"} Nov 25 15:14:21 crc kubenswrapper[4731]: I1125 15:14:21.376696 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-c55qz" podStartSLOduration=3.783942413 podStartE2EDuration="47.376666875s" podCreationTimestamp="2025-11-25 15:13:34 +0000 UTC" firstStartedPulling="2025-11-25 15:13:37.021615406 +0000 UTC m=+1175.768749836" lastFinishedPulling="2025-11-25 15:14:20.614339868 +0000 UTC m=+1219.361474298" observedRunningTime="2025-11-25 15:14:21.371787073 +0000 UTC m=+1220.118921523" watchObservedRunningTime="2025-11-25 15:14:21.376666875 +0000 UTC m=+1220.123801315" Nov 25 15:14:21 crc kubenswrapper[4731]: I1125 15:14:21.505248 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-545595f45c-m798f"] Nov 25 15:14:22 crc kubenswrapper[4731]: I1125 15:14:22.354437 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-545595f45c-m798f" event={"ID":"e70199f9-3fcf-4b0f-98c5-c45faf62c604","Type":"ContainerStarted","Data":"4e887cede401f47a99e61d52f0830d097280ccbcb1c8d3548efe43c947f2eb95"} Nov 25 15:14:22 crc kubenswrapper[4731]: I1125 15:14:22.355026 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:22 crc kubenswrapper[4731]: I1125 15:14:22.355046 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-545595f45c-m798f" event={"ID":"e70199f9-3fcf-4b0f-98c5-c45faf62c604","Type":"ContainerStarted","Data":"cda8b1a691ae3c7f57e6ea647c08ba245d0e10f3d3ea4814465b9aa790e4ef23"} Nov 25 15:14:22 crc kubenswrapper[4731]: I1125 15:14:22.355060 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-545595f45c-m798f" event={"ID":"e70199f9-3fcf-4b0f-98c5-c45faf62c604","Type":"ContainerStarted","Data":"6d3f42a1c22ccc332a500e6addcc236c5fe121ca405116d6798044cfd800db6d"} Nov 25 15:14:22 crc kubenswrapper[4731]: I1125 15:14:22.356940 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kplpn" event={"ID":"b5547f3b-298a-4906-8c60-1f51c87c1059","Type":"ContainerStarted","Data":"8164a5f4fe6eccb52e84c5e0589667efd541b0f6efc7c6d1b52df8790ab1f3aa"} Nov 25 15:14:22 crc kubenswrapper[4731]: I1125 15:14:22.383472 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-545595f45c-m798f" podStartSLOduration=2.383450512 podStartE2EDuration="2.383450512s" podCreationTimestamp="2025-11-25 15:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:22.377046671 +0000 UTC m=+1221.124181101" watchObservedRunningTime="2025-11-25 15:14:22.383450512 +0000 UTC m=+1221.130584952" Nov 25 15:14:22 crc kubenswrapper[4731]: I1125 15:14:22.397877 4731 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" podStartSLOduration=5.397857383 podStartE2EDuration="5.397857383s" podCreationTimestamp="2025-11-25 15:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:22.396965141 +0000 UTC m=+1221.144099561" watchObservedRunningTime="2025-11-25 15:14:22.397857383 +0000 UTC m=+1221.144991813" Nov 25 15:14:22 crc kubenswrapper[4731]: I1125 15:14:22.419896 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-kplpn" podStartSLOduration=3.9439056949999998 podStartE2EDuration="48.419876845s" podCreationTimestamp="2025-11-25 15:13:34 +0000 UTC" firstStartedPulling="2025-11-25 15:13:37.247214431 +0000 UTC m=+1175.994348861" lastFinishedPulling="2025-11-25 15:14:21.723185581 +0000 UTC m=+1220.470320011" observedRunningTime="2025-11-25 15:14:22.40968561 +0000 UTC m=+1221.156820060" watchObservedRunningTime="2025-11-25 15:14:22.419876845 +0000 UTC m=+1221.167011285" Nov 25 15:14:22 crc kubenswrapper[4731]: I1125 15:14:22.928578 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:27 crc kubenswrapper[4731]: I1125 15:14:27.929953 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.034279 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p49g6"] Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.034545 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" podUID="0f353937-4c45-4332-abdb-a187c7c7101d" containerName="dnsmasq-dns" containerID="cri-o://386abf86a2cdbca038f2e9784944fa7f034d4409435c0f386a902bb88e7ee3d6" gracePeriod=10 Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.440681 4731 generic.go:334] "Generic (PLEG): container finished" podID="0f353937-4c45-4332-abdb-a187c7c7101d" containerID="386abf86a2cdbca038f2e9784944fa7f034d4409435c0f386a902bb88e7ee3d6" exitCode=0 Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.441042 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" event={"ID":"0f353937-4c45-4332-abdb-a187c7c7101d","Type":"ContainerDied","Data":"386abf86a2cdbca038f2e9784944fa7f034d4409435c0f386a902bb88e7ee3d6"} Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.454197 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e845d406-36a7-4202-b123-51efd30331b8","Type":"ContainerStarted","Data":"941c8871693e5478a166ef3fef2c1d414cf16e3c3d98dfa5da2efba07c7c9e49"} Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.454366 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="ceilometer-central-agent" containerID="cri-o://91e374ba90cfe72dbfe20b2345f7db1c7d85e2b12ca8ae0ca85fa2a4161cc54f" gracePeriod=30 Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.454492 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="proxy-httpd" containerID="cri-o://941c8871693e5478a166ef3fef2c1d414cf16e3c3d98dfa5da2efba07c7c9e49" gracePeriod=30 Nov 25 
15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.454511 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.454532 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="sg-core" containerID="cri-o://28e7527159e60c8f6aff109a5bdff5c250bf0f168eb13bacf9a6003409cd2c39" gracePeriod=30 Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.454568 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="ceilometer-notification-agent" containerID="cri-o://7535d9df3e861334c651cae6618d459d26924583c180c1831d349844df7bf375" gracePeriod=30 Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.459236 4731 generic.go:334] "Generic (PLEG): container finished" podID="b5547f3b-298a-4906-8c60-1f51c87c1059" containerID="8164a5f4fe6eccb52e84c5e0589667efd541b0f6efc7c6d1b52df8790ab1f3aa" exitCode=0 Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.459285 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kplpn" event={"ID":"b5547f3b-298a-4906-8c60-1f51c87c1059","Type":"ContainerDied","Data":"8164a5f4fe6eccb52e84c5e0589667efd541b0f6efc7c6d1b52df8790ab1f3aa"} Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.484571 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.005178508 podStartE2EDuration="54.484550222s" podCreationTimestamp="2025-11-25 15:13:34 +0000 UTC" firstStartedPulling="2025-11-25 15:13:37.326420146 +0000 UTC m=+1176.073554576" lastFinishedPulling="2025-11-25 15:14:27.80579186 +0000 UTC m=+1226.552926290" observedRunningTime="2025-11-25 15:14:28.477152897 +0000 UTC m=+1227.224287327" watchObservedRunningTime="2025-11-25 15:14:28.484550222 +0000 UTC m=+1227.231684652" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.542319 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.615227 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-svc\") pod \"0f353937-4c45-4332-abdb-a187c7c7101d\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.615288 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-swift-storage-0\") pod \"0f353937-4c45-4332-abdb-a187c7c7101d\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.615332 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-sb\") pod \"0f353937-4c45-4332-abdb-a187c7c7101d\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.615358 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-nb\") pod \"0f353937-4c45-4332-abdb-a187c7c7101d\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.615426 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-config\") pod \"0f353937-4c45-4332-abdb-a187c7c7101d\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.615457 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66k6l\" (UniqueName: \"kubernetes.io/projected/0f353937-4c45-4332-abdb-a187c7c7101d-kube-api-access-66k6l\") pod \"0f353937-4c45-4332-abdb-a187c7c7101d\" (UID: \"0f353937-4c45-4332-abdb-a187c7c7101d\") " Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.622256 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f353937-4c45-4332-abdb-a187c7c7101d-kube-api-access-66k6l" (OuterVolumeSpecName: "kube-api-access-66k6l") pod "0f353937-4c45-4332-abdb-a187c7c7101d" (UID: "0f353937-4c45-4332-abdb-a187c7c7101d"). InnerVolumeSpecName "kube-api-access-66k6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.671268 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0f353937-4c45-4332-abdb-a187c7c7101d" (UID: "0f353937-4c45-4332-abdb-a187c7c7101d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.671292 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-config" (OuterVolumeSpecName: "config") pod "0f353937-4c45-4332-abdb-a187c7c7101d" (UID: "0f353937-4c45-4332-abdb-a187c7c7101d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.685064 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0f353937-4c45-4332-abdb-a187c7c7101d" (UID: "0f353937-4c45-4332-abdb-a187c7c7101d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.692060 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0f353937-4c45-4332-abdb-a187c7c7101d" (UID: "0f353937-4c45-4332-abdb-a187c7c7101d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.699524 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0f353937-4c45-4332-abdb-a187c7c7101d" (UID: "0f353937-4c45-4332-abdb-a187c7c7101d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.717513 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.717545 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.717556 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66k6l\" (UniqueName: \"kubernetes.io/projected/0f353937-4c45-4332-abdb-a187c7c7101d-kube-api-access-66k6l\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.717595 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.717604 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:28 crc kubenswrapper[4731]: I1125 15:14:28.717611 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0f353937-4c45-4332-abdb-a187c7c7101d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.473208 4731 generic.go:334] "Generic (PLEG): container finished" podID="4bf94950-c565-46c7-95b1-9a62e2915979" containerID="e971e1d970148de4c04f359b1dcd11e5d8557b952b092fec482781081388f214" exitCode=0 Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.473337 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-c55qz" event={"ID":"4bf94950-c565-46c7-95b1-9a62e2915979","Type":"ContainerDied","Data":"e971e1d970148de4c04f359b1dcd11e5d8557b952b092fec482781081388f214"} Nov 25 15:14:29 crc 
kubenswrapper[4731]: I1125 15:14:29.475496 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dv2l4" event={"ID":"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79","Type":"ContainerStarted","Data":"7e4aadd6127f2215a50ad8ce519d0d9de9adad382ad612c5cc67fb349208655d"} Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.477931 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" event={"ID":"0f353937-4c45-4332-abdb-a187c7c7101d","Type":"ContainerDied","Data":"3c689ae7bc3fd336f0040bf15e0c9c11aa860653663598f545395f33bbf42bc2"} Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.477975 4731 scope.go:117] "RemoveContainer" containerID="386abf86a2cdbca038f2e9784944fa7f034d4409435c0f386a902bb88e7ee3d6" Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.477991 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-p49g6" Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.482043 4731 generic.go:334] "Generic (PLEG): container finished" podID="e845d406-36a7-4202-b123-51efd30331b8" containerID="941c8871693e5478a166ef3fef2c1d414cf16e3c3d98dfa5da2efba07c7c9e49" exitCode=0 Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.482155 4731 generic.go:334] "Generic (PLEG): container finished" podID="e845d406-36a7-4202-b123-51efd30331b8" containerID="28e7527159e60c8f6aff109a5bdff5c250bf0f168eb13bacf9a6003409cd2c39" exitCode=2 Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.482175 4731 generic.go:334] "Generic (PLEG): container finished" podID="e845d406-36a7-4202-b123-51efd30331b8" containerID="91e374ba90cfe72dbfe20b2345f7db1c7d85e2b12ca8ae0ca85fa2a4161cc54f" exitCode=0 Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.482084 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e845d406-36a7-4202-b123-51efd30331b8","Type":"ContainerDied","Data":"941c8871693e5478a166ef3fef2c1d414cf16e3c3d98dfa5da2efba07c7c9e49"} Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.482247 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e845d406-36a7-4202-b123-51efd30331b8","Type":"ContainerDied","Data":"28e7527159e60c8f6aff109a5bdff5c250bf0f168eb13bacf9a6003409cd2c39"} Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.482266 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e845d406-36a7-4202-b123-51efd30331b8","Type":"ContainerDied","Data":"91e374ba90cfe72dbfe20b2345f7db1c7d85e2b12ca8ae0ca85fa2a4161cc54f"} Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.523498 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-dv2l4" podStartSLOduration=4.739458395 podStartE2EDuration="55.523481685s" podCreationTimestamp="2025-11-25 15:13:34 +0000 UTC" firstStartedPulling="2025-11-25 15:13:36.995409119 +0000 UTC m=+1175.742543549" lastFinishedPulling="2025-11-25 15:14:27.779432409 +0000 UTC m=+1226.526566839" observedRunningTime="2025-11-25 15:14:29.519358321 +0000 UTC m=+1228.266492751" watchObservedRunningTime="2025-11-25 15:14:29.523481685 +0000 UTC m=+1228.270616115" Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.533595 4731 scope.go:117] "RemoveContainer" containerID="532b6144532f12922a4e312b89b6e271e61bb852b554356ae482c133ae817198" Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.540096 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-56df8fb6b7-p49g6"] Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.575504 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-p49g6"] Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.835666 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kplpn" Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.980082 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-combined-ca-bundle\") pod \"b5547f3b-298a-4906-8c60-1f51c87c1059\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.980126 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfgsf\" (UniqueName: \"kubernetes.io/projected/b5547f3b-298a-4906-8c60-1f51c87c1059-kube-api-access-wfgsf\") pod \"b5547f3b-298a-4906-8c60-1f51c87c1059\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.980197 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-db-sync-config-data\") pod \"b5547f3b-298a-4906-8c60-1f51c87c1059\" (UID: \"b5547f3b-298a-4906-8c60-1f51c87c1059\") " Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.990101 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "b5547f3b-298a-4906-8c60-1f51c87c1059" (UID: "b5547f3b-298a-4906-8c60-1f51c87c1059"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:29 crc kubenswrapper[4731]: I1125 15:14:29.990196 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5547f3b-298a-4906-8c60-1f51c87c1059-kube-api-access-wfgsf" (OuterVolumeSpecName: "kube-api-access-wfgsf") pod "b5547f3b-298a-4906-8c60-1f51c87c1059" (UID: "b5547f3b-298a-4906-8c60-1f51c87c1059"). InnerVolumeSpecName "kube-api-access-wfgsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.005551 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b5547f3b-298a-4906-8c60-1f51c87c1059" (UID: "b5547f3b-298a-4906-8c60-1f51c87c1059"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.082039 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.082078 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfgsf\" (UniqueName: \"kubernetes.io/projected/b5547f3b-298a-4906-8c60-1f51c87c1059-kube-api-access-wfgsf\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.082093 4731 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/b5547f3b-298a-4906-8c60-1f51c87c1059-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.493152 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-kplpn" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.493151 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-kplpn" event={"ID":"b5547f3b-298a-4906-8c60-1f51c87c1059","Type":"ContainerDied","Data":"9e07e34b88b0c33a4a85314f836ff489427e11081724293b5db71b6783089fd4"} Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.493276 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e07e34b88b0c33a4a85314f836ff489427e11081724293b5db71b6783089fd4" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.746880 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-7dd99f9879-qskfg"] Nov 25 15:14:30 crc kubenswrapper[4731]: E1125 15:14:30.747510 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5547f3b-298a-4906-8c60-1f51c87c1059" containerName="barbican-db-sync" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.747526 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5547f3b-298a-4906-8c60-1f51c87c1059" containerName="barbican-db-sync" Nov 25 15:14:30 crc kubenswrapper[4731]: E1125 15:14:30.747561 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f353937-4c45-4332-abdb-a187c7c7101d" containerName="dnsmasq-dns" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.747567 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f353937-4c45-4332-abdb-a187c7c7101d" containerName="dnsmasq-dns" Nov 25 15:14:30 crc kubenswrapper[4731]: E1125 15:14:30.747585 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f353937-4c45-4332-abdb-a187c7c7101d" containerName="init" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.747592 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f353937-4c45-4332-abdb-a187c7c7101d" containerName="init" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.747745 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f353937-4c45-4332-abdb-a187c7c7101d" containerName="dnsmasq-dns" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.747767 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5547f3b-298a-4906-8c60-1f51c87c1059" containerName="barbican-db-sync" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.748709 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.755424 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7dd99f9879-qskfg"] Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.768102 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.771092 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.776132 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-9ffpv" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.794528 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5c7d6d4554-f24zm"] Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.796078 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.801118 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.832598 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f353937-4c45-4332-abdb-a187c7c7101d" path="/var/lib/kubelet/pods/0f353937-4c45-4332-abdb-a187c7c7101d/volumes" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.833236 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c7d6d4554-f24zm"] Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.894140 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4d2e-6323-48b0-b781-313cab464832-logs\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.894190 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ba4d2e-6323-48b0-b781-313cab464832-combined-ca-bundle\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.894221 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ba4d2e-6323-48b0-b781-313cab464832-config-data\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.894244 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4493c299-fefe-4934-b370-f2270a1b6226-config-data-custom\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.894264 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4493c299-fefe-4934-b370-f2270a1b6226-combined-ca-bundle\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.894281 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8ba4d2e-6323-48b0-b781-313cab464832-config-data-custom\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.894341 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4493c299-fefe-4934-b370-f2270a1b6226-logs\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.894379 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrl9\" (UniqueName: \"kubernetes.io/projected/b8ba4d2e-6323-48b0-b781-313cab464832-kube-api-access-pdrl9\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.894403 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzvd9\" (UniqueName: \"kubernetes.io/projected/4493c299-fefe-4934-b370-f2270a1b6226-kube-api-access-zzvd9\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.894436 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4493c299-fefe-4934-b370-f2270a1b6226-config-data\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.933424 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-c55qz" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.951355 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f9m22"] Nov 25 15:14:30 crc kubenswrapper[4731]: E1125 15:14:30.951906 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf94950-c565-46c7-95b1-9a62e2915979" containerName="heat-db-sync" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.951993 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf94950-c565-46c7-95b1-9a62e2915979" containerName="heat-db-sync" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.952212 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf94950-c565-46c7-95b1-9a62e2915979" containerName="heat-db-sync" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.960468 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.990065 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f9m22"] Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.995909 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4d2e-6323-48b0-b781-313cab464832-logs\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.995979 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ba4d2e-6323-48b0-b781-313cab464832-combined-ca-bundle\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.996024 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ba4d2e-6323-48b0-b781-313cab464832-config-data\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.996056 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4493c299-fefe-4934-b370-f2270a1b6226-config-data-custom\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.996083 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4493c299-fefe-4934-b370-f2270a1b6226-combined-ca-bundle\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.996105 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8ba4d2e-6323-48b0-b781-313cab464832-config-data-custom\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.996195 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4493c299-fefe-4934-b370-f2270a1b6226-logs\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.996260 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdrl9\" (UniqueName: \"kubernetes.io/projected/b8ba4d2e-6323-48b0-b781-313cab464832-kube-api-access-pdrl9\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 
15:14:30.996293 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzvd9\" (UniqueName: \"kubernetes.io/projected/4493c299-fefe-4934-b370-f2270a1b6226-kube-api-access-zzvd9\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.996350 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4493c299-fefe-4934-b370-f2270a1b6226-config-data\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:30 crc kubenswrapper[4731]: I1125 15:14:30.998251 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4493c299-fefe-4934-b370-f2270a1b6226-logs\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.007109 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8ba4d2e-6323-48b0-b781-313cab464832-logs\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.016418 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8ba4d2e-6323-48b0-b781-313cab464832-config-data-custom\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.035522 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4493c299-fefe-4934-b370-f2270a1b6226-combined-ca-bundle\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.039942 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8ba4d2e-6323-48b0-b781-313cab464832-combined-ca-bundle\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.045806 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdrl9\" (UniqueName: \"kubernetes.io/projected/b8ba4d2e-6323-48b0-b781-313cab464832-kube-api-access-pdrl9\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.047711 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/4493c299-fefe-4934-b370-f2270a1b6226-config-data-custom\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " 
pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.047894 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8ba4d2e-6323-48b0-b781-313cab464832-config-data\") pod \"barbican-keystone-listener-5c7d6d4554-f24zm\" (UID: \"b8ba4d2e-6323-48b0-b781-313cab464832\") " pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.050405 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzvd9\" (UniqueName: \"kubernetes.io/projected/4493c299-fefe-4934-b370-f2270a1b6226-kube-api-access-zzvd9\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.052377 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4493c299-fefe-4934-b370-f2270a1b6226-config-data\") pod \"barbican-worker-7dd99f9879-qskfg\" (UID: \"4493c299-fefe-4934-b370-f2270a1b6226\") " pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.096804 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-7dd99f9879-qskfg" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.097831 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-config-data\") pod \"4bf94950-c565-46c7-95b1-9a62e2915979\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.098497 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vpkf\" (UniqueName: \"kubernetes.io/projected/4bf94950-c565-46c7-95b1-9a62e2915979-kube-api-access-9vpkf\") pod \"4bf94950-c565-46c7-95b1-9a62e2915979\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.098599 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-combined-ca-bundle\") pod \"4bf94950-c565-46c7-95b1-9a62e2915979\" (UID: \"4bf94950-c565-46c7-95b1-9a62e2915979\") " Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.099006 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-config\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.099131 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vpdn\" (UniqueName: \"kubernetes.io/projected/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-kube-api-access-5vpdn\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.099243 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.099316 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.099404 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.099526 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.124549 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf94950-c565-46c7-95b1-9a62e2915979-kube-api-access-9vpkf" (OuterVolumeSpecName: "kube-api-access-9vpkf") pod "4bf94950-c565-46c7-95b1-9a62e2915979" (UID: "4bf94950-c565-46c7-95b1-9a62e2915979"). InnerVolumeSpecName "kube-api-access-9vpkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.181214 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bf94950-c565-46c7-95b1-9a62e2915979" (UID: "4bf94950-c565-46c7-95b1-9a62e2915979"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.206394 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-9dd67bbc8-dfsxx"] Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.209300 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.209443 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-config\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.209502 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vpdn\" (UniqueName: \"kubernetes.io/projected/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-kube-api-access-5vpdn\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.209567 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.209590 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.209639 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.209737 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vpkf\" (UniqueName: \"kubernetes.io/projected/4bf94950-c565-46c7-95b1-9a62e2915979-kube-api-access-9vpkf\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.209758 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.212157 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.214141 4731 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.234750 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-config-data" (OuterVolumeSpecName: "config-data") pod "4bf94950-c565-46c7-95b1-9a62e2915979" (UID: "4bf94950-c565-46c7-95b1-9a62e2915979"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.235250 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.235630 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.236053 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.236829 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-config\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.237084 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.237638 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.250757 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9dd67bbc8-dfsxx"] Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.258711 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vpdn\" (UniqueName: \"kubernetes.io/projected/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-kube-api-access-5vpdn\") pod \"dnsmasq-dns-848cf88cfc-f9m22\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") " pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.289439 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.311759 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.311837 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data-custom\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.311900 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-logs\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.312071 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-combined-ca-bundle\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.312444 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfzhn\" (UniqueName: \"kubernetes.io/projected/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-kube-api-access-xfzhn\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.312610 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf94950-c565-46c7-95b1-9a62e2915979-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.414554 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-logs\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.414943 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-combined-ca-bundle\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.415085 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xfzhn\" (UniqueName: \"kubernetes.io/projected/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-kube-api-access-xfzhn\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc 
kubenswrapper[4731]: I1125 15:14:31.415129 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-logs\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.415238 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.415274 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data-custom\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.423855 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-combined-ca-bundle\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.438423 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfzhn\" (UniqueName: \"kubernetes.io/projected/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-kube-api-access-xfzhn\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.438504 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data-custom\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.442505 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data\") pod \"barbican-api-9dd67bbc8-dfsxx\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") " pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.507137 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-c55qz" event={"ID":"4bf94950-c565-46c7-95b1-9a62e2915979","Type":"ContainerDied","Data":"fea2a259ff31ea02c177ae7088be85ba945938c748850662d97162110ecc8249"} Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.507193 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fea2a259ff31ea02c177ae7088be85ba945938c748850662d97162110ecc8249" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.507283 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-c55qz" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.566297 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.625042 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-7dd99f9879-qskfg"] Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.784960 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5c7d6d4554-f24zm"] Nov 25 15:14:31 crc kubenswrapper[4731]: I1125 15:14:31.894933 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f9m22"] Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.056015 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-9dd67bbc8-dfsxx"] Nov 25 15:14:32 crc kubenswrapper[4731]: W1125 15:14:32.057003 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f2f3c8a_626e_442d_880e_3f6b6358c5f9.slice/crio-c51c998f463af6051c498ae9afc9d00be2716a45103ccaa6c4c2a8fd798ae8b1 WatchSource:0}: Error finding container c51c998f463af6051c498ae9afc9d00be2716a45103ccaa6c4c2a8fd798ae8b1: Status 404 returned error can't find the container with id c51c998f463af6051c498ae9afc9d00be2716a45103ccaa6c4c2a8fd798ae8b1 Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.518564 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dd99f9879-qskfg" event={"ID":"4493c299-fefe-4934-b370-f2270a1b6226","Type":"ContainerStarted","Data":"7c6ff0da819cc898cdb1c797b5585635e83a8b1be42661402b32bc3327c45075"} Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.522186 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dd67bbc8-dfsxx" event={"ID":"3f2f3c8a-626e-442d-880e-3f6b6358c5f9","Type":"ContainerStarted","Data":"bd77b646d49c55c07df0920b8e18ea4db9dc2420e8dd045d73e01f7ab50dba77"} Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.522240 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dd67bbc8-dfsxx" event={"ID":"3f2f3c8a-626e-442d-880e-3f6b6358c5f9","Type":"ContainerStarted","Data":"2ad0348fa9586e8a3b34d4f98ad9de624e5eaa5e4f2103acae3d71db5d429e9a"} Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.522254 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dd67bbc8-dfsxx" event={"ID":"3f2f3c8a-626e-442d-880e-3f6b6358c5f9","Type":"ContainerStarted","Data":"c51c998f463af6051c498ae9afc9d00be2716a45103ccaa6c4c2a8fd798ae8b1"} Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.522294 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.526411 4731 generic.go:334] "Generic (PLEG): container finished" podID="e845d406-36a7-4202-b123-51efd30331b8" containerID="7535d9df3e861334c651cae6618d459d26924583c180c1831d349844df7bf375" exitCode=0 Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.526489 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e845d406-36a7-4202-b123-51efd30331b8","Type":"ContainerDied","Data":"7535d9df3e861334c651cae6618d459d26924583c180c1831d349844df7bf375"} Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.528153 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" 
event={"ID":"b8ba4d2e-6323-48b0-b781-313cab464832","Type":"ContainerStarted","Data":"318d2562c67af8524a1bd13a8381b1f05cb3380f1e48653e4e468469cccabc66"} Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.529906 4731 generic.go:334] "Generic (PLEG): container finished" podID="c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" containerID="0f6e1913b0d8fc967baee1e9c64952c829aa47a4240d5698dbdbfe33430d259d" exitCode=0 Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.529942 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" event={"ID":"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c","Type":"ContainerDied","Data":"0f6e1913b0d8fc967baee1e9c64952c829aa47a4240d5698dbdbfe33430d259d"} Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.529964 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" event={"ID":"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c","Type":"ContainerStarted","Data":"59f53b2bebf61e64940f460f74a9c292b9b151963fd5a80e862001a08a9972bc"} Nov 25 15:14:32 crc kubenswrapper[4731]: I1125 15:14:32.550831 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-9dd67bbc8-dfsxx" podStartSLOduration=1.5507929599999999 podStartE2EDuration="1.55079296s" podCreationTimestamp="2025-11-25 15:14:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:32.541500385 +0000 UTC m=+1231.288634825" watchObservedRunningTime="2025-11-25 15:14:32.55079296 +0000 UTC m=+1231.297927390" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.199035 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.272394 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-combined-ca-bundle\") pod \"e845d406-36a7-4202-b123-51efd30331b8\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.272456 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-scripts\") pod \"e845d406-36a7-4202-b123-51efd30331b8\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.272612 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-sg-core-conf-yaml\") pod \"e845d406-36a7-4202-b123-51efd30331b8\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.272665 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdzwf\" (UniqueName: \"kubernetes.io/projected/e845d406-36a7-4202-b123-51efd30331b8-kube-api-access-wdzwf\") pod \"e845d406-36a7-4202-b123-51efd30331b8\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.272751 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-run-httpd\") pod \"e845d406-36a7-4202-b123-51efd30331b8\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " Nov 
25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.272796 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-config-data\") pod \"e845d406-36a7-4202-b123-51efd30331b8\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.272845 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-log-httpd\") pod \"e845d406-36a7-4202-b123-51efd30331b8\" (UID: \"e845d406-36a7-4202-b123-51efd30331b8\") " Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.273930 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e845d406-36a7-4202-b123-51efd30331b8" (UID: "e845d406-36a7-4202-b123-51efd30331b8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.274611 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e845d406-36a7-4202-b123-51efd30331b8" (UID: "e845d406-36a7-4202-b123-51efd30331b8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.297433 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-scripts" (OuterVolumeSpecName: "scripts") pod "e845d406-36a7-4202-b123-51efd30331b8" (UID: "e845d406-36a7-4202-b123-51efd30331b8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.297614 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e845d406-36a7-4202-b123-51efd30331b8-kube-api-access-wdzwf" (OuterVolumeSpecName: "kube-api-access-wdzwf") pod "e845d406-36a7-4202-b123-51efd30331b8" (UID: "e845d406-36a7-4202-b123-51efd30331b8"). InnerVolumeSpecName "kube-api-access-wdzwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.312407 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e845d406-36a7-4202-b123-51efd30331b8" (UID: "e845d406-36a7-4202-b123-51efd30331b8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.367468 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e845d406-36a7-4202-b123-51efd30331b8" (UID: "e845d406-36a7-4202-b123-51efd30331b8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.374107 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.374138 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.374149 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.374158 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.374166 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdzwf\" (UniqueName: \"kubernetes.io/projected/e845d406-36a7-4202-b123-51efd30331b8-kube-api-access-wdzwf\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.374175 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e845d406-36a7-4202-b123-51efd30331b8-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.393000 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-config-data" (OuterVolumeSpecName: "config-data") pod "e845d406-36a7-4202-b123-51efd30331b8" (UID: "e845d406-36a7-4202-b123-51efd30331b8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.475568 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e845d406-36a7-4202-b123-51efd30331b8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.545359 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e845d406-36a7-4202-b123-51efd30331b8","Type":"ContainerDied","Data":"0295c9f3171a0360ee965a71fbc9bfc6a87bb341c0cad0974861a2509783d61f"} Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.545474 4731 scope.go:117] "RemoveContainer" containerID="941c8871693e5478a166ef3fef2c1d414cf16e3c3d98dfa5da2efba07c7c9e49" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.545639 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.553942 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" event={"ID":"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c","Type":"ContainerStarted","Data":"77fa5a304f8688792e0943994f692423fbc82f155bf91cd539a237b4fb047222"} Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.554102 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.554264 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.572468 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" podStartSLOduration=3.572448499 podStartE2EDuration="3.572448499s" podCreationTimestamp="2025-11-25 15:14:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:33.571784143 +0000 UTC m=+1232.318918593" watchObservedRunningTime="2025-11-25 15:14:33.572448499 +0000 UTC m=+1232.319582939" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.600664 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.626018 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.646385 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:14:33 crc kubenswrapper[4731]: E1125 15:14:33.646837 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="ceilometer-notification-agent" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.646858 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="ceilometer-notification-agent" Nov 25 15:14:33 crc kubenswrapper[4731]: E1125 15:14:33.646871 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="ceilometer-central-agent" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.646878 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="ceilometer-central-agent" Nov 25 15:14:33 crc kubenswrapper[4731]: E1125 15:14:33.646911 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="sg-core" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.646917 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="sg-core" Nov 25 15:14:33 crc kubenswrapper[4731]: E1125 15:14:33.646928 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="proxy-httpd" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.646934 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="proxy-httpd" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.647092 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e845d406-36a7-4202-b123-51efd30331b8" 
containerName="ceilometer-central-agent" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.647108 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="proxy-httpd" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.647118 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="ceilometer-notification-agent" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.647130 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e845d406-36a7-4202-b123-51efd30331b8" containerName="sg-core" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.649197 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.650759 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.652499 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.657132 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.679939 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hx67\" (UniqueName: \"kubernetes.io/projected/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-kube-api-access-4hx67\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.679999 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-log-httpd\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.680023 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-config-data\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.680048 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-run-httpd\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.680079 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-scripts\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.680120 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc 
kubenswrapper[4731]: I1125 15:14:33.680289 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.782349 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hx67\" (UniqueName: \"kubernetes.io/projected/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-kube-api-access-4hx67\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.783501 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-log-httpd\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.784013 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-log-httpd\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.784111 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-config-data\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.785001 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-run-httpd\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.785083 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-scripts\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.785399 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-run-httpd\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.785532 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.786099 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: 
I1125 15:14:33.792077 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-config-data\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.795529 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-scripts\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.796065 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.796751 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.800905 4731 scope.go:117] "RemoveContainer" containerID="28e7527159e60c8f6aff109a5bdff5c250bf0f168eb13bacf9a6003409cd2c39" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.812373 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hx67\" (UniqueName: \"kubernetes.io/projected/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-kube-api-access-4hx67\") pod \"ceilometer-0\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " pod="openstack/ceilometer-0" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.962750 4731 scope.go:117] "RemoveContainer" containerID="7535d9df3e861334c651cae6618d459d26924583c180c1831d349844df7bf375" Nov 25 15:14:33 crc kubenswrapper[4731]: I1125 15:14:33.986860 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.147695 4731 scope.go:117] "RemoveContainer" containerID="91e374ba90cfe72dbfe20b2345f7db1c7d85e2b12ca8ae0ca85fa2a4161cc54f" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.161677 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6dc4f8989b-lbfrn"] Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.163582 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.169178 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.169345 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.186531 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dc4f8989b-lbfrn"] Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.198051 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns6z5\" (UniqueName: \"kubernetes.io/projected/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-kube-api-access-ns6z5\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.198089 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-config-data-custom\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.198118 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-logs\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.198147 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-combined-ca-bundle\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.198173 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-config-data\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.198221 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-internal-tls-certs\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.198266 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-public-tls-certs\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.304739 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-public-tls-certs\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.304872 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns6z5\" (UniqueName: \"kubernetes.io/projected/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-kube-api-access-ns6z5\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.304900 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-config-data-custom\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.304936 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-logs\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.304973 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-combined-ca-bundle\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.305008 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-config-data\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.305076 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-internal-tls-certs\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.311030 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-config-data-custom\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.311302 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-internal-tls-certs\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.311406 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-logs\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.313766 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-combined-ca-bundle\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.315120 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-config-data\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.317324 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-public-tls-certs\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.333249 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns6z5\" (UniqueName: \"kubernetes.io/projected/5b11609c-ee96-473a-b1e6-dbbf07a9bab2-kube-api-access-ns6z5\") pod \"barbican-api-6dc4f8989b-lbfrn\" (UID: \"5b11609c-ee96-473a-b1e6-dbbf07a9bab2\") " pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.465874 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:14:34 crc kubenswrapper[4731]: W1125 15:14:34.467779 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d30f6bd_f2f7_4edb_a26a_a63d92e2414c.slice/crio-d11fff7a6c10849745d713ced2b910ab0e9aea70bc4f5507d71e01d8018b6214 WatchSource:0}: Error finding container d11fff7a6c10849745d713ced2b910ab0e9aea70bc4f5507d71e01d8018b6214: Status 404 returned error can't find the container with id d11fff7a6c10849745d713ced2b910ab0e9aea70bc4f5507d71e01d8018b6214 Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.489771 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.564046 4731 generic.go:334] "Generic (PLEG): container finished" podID="d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" containerID="7e4aadd6127f2215a50ad8ce519d0d9de9adad382ad612c5cc67fb349208655d" exitCode=0 Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.564136 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dv2l4" event={"ID":"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79","Type":"ContainerDied","Data":"7e4aadd6127f2215a50ad8ce519d0d9de9adad382ad612c5cc67fb349208655d"} Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.565807 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c","Type":"ContainerStarted","Data":"d11fff7a6c10849745d713ced2b910ab0e9aea70bc4f5507d71e01d8018b6214"} Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.578727 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" event={"ID":"b8ba4d2e-6323-48b0-b781-313cab464832","Type":"ContainerStarted","Data":"f509d269c7b8261fa9d770ec055cd1123a774127512dc406a1a00a2b53cd06fa"} Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.578972 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" event={"ID":"b8ba4d2e-6323-48b0-b781-313cab464832","Type":"ContainerStarted","Data":"942d094e07bfb5d7d5cc5a7a352e4f0651b70a4b13fc37ceb480526b3bc1325b"} Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.589562 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dd99f9879-qskfg" event={"ID":"4493c299-fefe-4934-b370-f2270a1b6226","Type":"ContainerStarted","Data":"dee09ee5768bbde4818ff6962462f8f192e143a5812220de10449404f145434f"} Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.589627 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-7dd99f9879-qskfg" event={"ID":"4493c299-fefe-4934-b370-f2270a1b6226","Type":"ContainerStarted","Data":"54e1396b0a9600b04a90436897e11fc8cc430efe88dd8c735266b020de12912f"} Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.619010 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5c7d6d4554-f24zm" podStartSLOduration=2.602930274 podStartE2EDuration="4.618977496s" podCreationTimestamp="2025-11-25 15:14:30 +0000 UTC" firstStartedPulling="2025-11-25 15:14:31.794723509 +0000 UTC m=+1230.541857929" lastFinishedPulling="2025-11-25 15:14:33.810770731 +0000 UTC m=+1232.557905151" observedRunningTime="2025-11-25 15:14:34.61198933 +0000 UTC m=+1233.359123760" watchObservedRunningTime="2025-11-25 15:14:34.618977496 +0000 UTC m=+1233.366111926" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.666796 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-7dd99f9879-qskfg" podStartSLOduration=2.492646663 podStartE2EDuration="4.666772482s" podCreationTimestamp="2025-11-25 15:14:30 +0000 UTC" firstStartedPulling="2025-11-25 15:14:31.631665516 +0000 UTC m=+1230.378799946" lastFinishedPulling="2025-11-25 15:14:33.805791325 +0000 UTC m=+1232.552925765" observedRunningTime="2025-11-25 15:14:34.643433103 +0000 UTC m=+1233.390567533" watchObservedRunningTime="2025-11-25 15:14:34.666772482 +0000 UTC m=+1233.413906912" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 
15:14:34.829696 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e845d406-36a7-4202-b123-51efd30331b8" path="/var/lib/kubelet/pods/e845d406-36a7-4202-b123-51efd30331b8/volumes" Nov 25 15:14:34 crc kubenswrapper[4731]: I1125 15:14:34.996158 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6dc4f8989b-lbfrn"] Nov 25 15:14:34 crc kubenswrapper[4731]: W1125 15:14:34.998497 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b11609c_ee96_473a_b1e6_dbbf07a9bab2.slice/crio-6b21097c96dc7c90d729c873a6dfa6bd3d1c1648b8a7d1575cf88378843bc1d1 WatchSource:0}: Error finding container 6b21097c96dc7c90d729c873a6dfa6bd3d1c1648b8a7d1575cf88378843bc1d1: Status 404 returned error can't find the container with id 6b21097c96dc7c90d729c873a6dfa6bd3d1c1648b8a7d1575cf88378843bc1d1 Nov 25 15:14:35 crc kubenswrapper[4731]: I1125 15:14:35.598149 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c","Type":"ContainerStarted","Data":"6a9167a272110e6418746593fdeee614209fc2aa77060e663aa897b38d286125"} Nov 25 15:14:35 crc kubenswrapper[4731]: I1125 15:14:35.599732 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dc4f8989b-lbfrn" event={"ID":"5b11609c-ee96-473a-b1e6-dbbf07a9bab2","Type":"ContainerStarted","Data":"da513ba06b3ed7282db93d0883fd6583adcf27ff6605e4620a47fac07e5ef3a4"} Nov 25 15:14:35 crc kubenswrapper[4731]: I1125 15:14:35.599776 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dc4f8989b-lbfrn" event={"ID":"5b11609c-ee96-473a-b1e6-dbbf07a9bab2","Type":"ContainerStarted","Data":"c6256649e6d863acdab470e449497d3d7b1a80a9e51fbe1381a93d3dc839f73d"} Nov 25 15:14:35 crc kubenswrapper[4731]: I1125 15:14:35.599787 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6dc4f8989b-lbfrn" event={"ID":"5b11609c-ee96-473a-b1e6-dbbf07a9bab2","Type":"ContainerStarted","Data":"6b21097c96dc7c90d729c873a6dfa6bd3d1c1648b8a7d1575cf88378843bc1d1"} Nov 25 15:14:35 crc kubenswrapper[4731]: I1125 15:14:35.629999 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6dc4f8989b-lbfrn" podStartSLOduration=1.629980556 podStartE2EDuration="1.629980556s" podCreationTimestamp="2025-11-25 15:14:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:35.623052322 +0000 UTC m=+1234.370186752" watchObservedRunningTime="2025-11-25 15:14:35.629980556 +0000 UTC m=+1234.377114986" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.048552 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.179644 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-combined-ca-bundle\") pod \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.179741 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-config-data\") pod \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.179778 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-etc-machine-id\") pod \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.179804 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-scripts\") pod \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.179897 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5wpl\" (UniqueName: \"kubernetes.io/projected/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-kube-api-access-w5wpl\") pod \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.179903 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" (UID: "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.179963 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-db-sync-config-data\") pod \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\" (UID: \"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79\") " Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.180312 4731 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.184240 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-scripts" (OuterVolumeSpecName: "scripts") pod "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" (UID: "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.184322 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" (UID: "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.186933 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-kube-api-access-w5wpl" (OuterVolumeSpecName: "kube-api-access-w5wpl") pod "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" (UID: "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79"). InnerVolumeSpecName "kube-api-access-w5wpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.206478 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" (UID: "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.227595 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-config-data" (OuterVolumeSpecName: "config-data") pod "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" (UID: "d828d0bf-a1e4-45a4-a840-4ddd1f84ed79"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.286337 4731 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.286622 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.286634 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.286644 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.286654 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5wpl\" (UniqueName: \"kubernetes.io/projected/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79-kube-api-access-w5wpl\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.652187 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-dv2l4" event={"ID":"d828d0bf-a1e4-45a4-a840-4ddd1f84ed79","Type":"ContainerDied","Data":"223179c58712b5c516aba895b5f0cfd45cab345636a75bb481155e2d58d9fe96"} Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.652238 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="223179c58712b5c516aba895b5f0cfd45cab345636a75bb481155e2d58d9fe96" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.652323 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-dv2l4" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.707108 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c","Type":"ContainerStarted","Data":"7225a168f9acddc98980b21367a098dd8279b971d46631aaa577fe1b52d22c59"} Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.707158 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.707194 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6dc4f8989b-lbfrn" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.996887 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:14:36 crc kubenswrapper[4731]: E1125 15:14:36.997592 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" containerName="cinder-db-sync" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.997611 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" containerName="cinder-db-sync" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.997835 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" containerName="cinder-db-sync" Nov 25 15:14:36 crc kubenswrapper[4731]: I1125 15:14:36.998771 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.002169 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.002456 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.002895 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.003044 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7ql8z" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.013455 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.087042 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f9m22"] Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.087267 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" podUID="c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" containerName="dnsmasq-dns" containerID="cri-o://77fa5a304f8688792e0943994f692423fbc82f155bf91cd539a237b4fb047222" gracePeriod=10 Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.094003 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.106890 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7eb67bcd-fcd6-45fe-bb98-bc5334032775-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" 
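
The attach/mount sequence that repeats above for ceilometer-0, barbican-api-6dc4f8989b-lbfrn, and now cinder-scheduler-0 always has three phases per volume: operationExecutor.VerifyControllerAttachedVolume starts, operationExecutor.MountVolume starts, and MountVolume.SetUp reports success for the same quoted volume name. A minimal Python sketch of that audit, assuming the journal text is piped on stdin and that keying records on the quoted volume name alone is sufficient (within one pod's records these names are unique; the script does no pod-UID bookkeeping and ignores the unmount path):

    import re
    import sys
    from collections import defaultdict

    # The three mount-path phases kubelet logs per volume, as seen above.
    # \\? tolerates the escaped quotes (\") that appear in the journal text.
    PHASES = [
        ("attach", re.compile(r'VerifyControllerAttachedVolume started for volume \\?"([^"\\]+)')),
        ("mount",  re.compile(r'MountVolume started for volume \\?"([^"\\]+)')),
        ("setup",  re.compile(r'MountVolume\.SetUp succeeded for volume \\?"([^"\\]+)')),
    ]

    def audit(lines):
        """Map each volume name to the set of phases observed for it."""
        seen = defaultdict(set)
        for line in lines:
            for phase, pat in PHASES:
                m = pat.search(line)
                if m:
                    seen[m.group(1)].add(phase)
        return seen

    if __name__ == "__main__":
        for vol, phases in sorted(audit(sys.stdin).items()):
            status = "ok" if "setup" in phases else "STUCK"
            print(f"{status:5} {vol}: {sorted(phases)}")

For example, `journalctl -u kubelet | python3 audit_mounts.py` (file name hypothetical) would flag any volume that began VerifyControllerAttachedVolume but never reached MountVolume.SetUp; in the excerpt above every volume completes all three phases.
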
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.107172 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.107264 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-scripts\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.107352 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.107468 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gdk5\" (UniqueName: \"kubernetes.io/projected/7eb67bcd-fcd6-45fe-bb98-bc5334032775-kube-api-access-8gdk5\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.107578 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.154602 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vksdl"] Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.157831 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.189672 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vksdl"] Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.208991 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.209067 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7eb67bcd-fcd6-45fe-bb98-bc5334032775-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.209124 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.209152 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-scripts\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.209188 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.209230 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gdk5\" (UniqueName: \"kubernetes.io/projected/7eb67bcd-fcd6-45fe-bb98-bc5334032775-kube-api-access-8gdk5\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.222159 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7eb67bcd-fcd6-45fe-bb98-bc5334032775-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.227552 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.227795 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc 
kubenswrapper[4731]: I1125 15:14:37.228140 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-scripts\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.229877 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.251171 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gdk5\" (UniqueName: \"kubernetes.io/projected/7eb67bcd-fcd6-45fe-bb98-bc5334032775-kube-api-access-8gdk5\") pod \"cinder-scheduler-0\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.311984 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.312049 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-svc\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.312101 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66t9v\" (UniqueName: \"kubernetes.io/projected/0bdd6185-ea4d-4507-b925-5f093fc33931-kube-api-access-66t9v\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.312126 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.312155 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-config\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.312191 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 
15:14:37.330509 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.332018 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.332098 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.335458 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.349262 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.413955 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414301 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-svc\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414336 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414372 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phgn\" (UniqueName: \"kubernetes.io/projected/33e14cc6-322e-4d98-9f70-0bff81ffb319-kube-api-access-9phgn\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414399 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data-custom\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414485 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66t9v\" (UniqueName: \"kubernetes.io/projected/0bdd6185-ea4d-4507-b925-5f093fc33931-kube-api-access-66t9v\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414508 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414534 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-config\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414564 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414593 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-scripts\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414617 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e14cc6-322e-4d98-9f70-0bff81ffb319-etc-machine-id\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414652 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.414755 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33e14cc6-322e-4d98-9f70-0bff81ffb319-logs\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.415828 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.416330 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-svc\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.417490 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.417853 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-config\") pod 
\"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.418629 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.444127 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66t9v\" (UniqueName: \"kubernetes.io/projected/0bdd6185-ea4d-4507-b925-5f093fc33931-kube-api-access-66t9v\") pod \"dnsmasq-dns-6578955fd5-vksdl\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.518768 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33e14cc6-322e-4d98-9f70-0bff81ffb319-logs\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.518859 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.518884 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9phgn\" (UniqueName: \"kubernetes.io/projected/33e14cc6-322e-4d98-9f70-0bff81ffb319-kube-api-access-9phgn\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.518902 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data-custom\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.518947 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-scripts\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.518965 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e14cc6-322e-4d98-9f70-0bff81ffb319-etc-machine-id\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.518988 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.519399 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/33e14cc6-322e-4d98-9f70-0bff81ffb319-logs\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.519791 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e14cc6-322e-4d98-9f70-0bff81ffb319-etc-machine-id\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.531536 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-scripts\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.531613 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.539601 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data-custom\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.539616 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.557917 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9phgn\" (UniqueName: \"kubernetes.io/projected/33e14cc6-322e-4d98-9f70-0bff81ffb319-kube-api-access-9phgn\") pod \"cinder-api-0\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") " pod="openstack/cinder-api-0" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.646862 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.682638 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.726714 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c","Type":"ContainerStarted","Data":"981952b611751d2a4d905c6db56ef9324c0cae6c74ec8685ba3623cdcb9057da"}
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.744995 4731 generic.go:334] "Generic (PLEG): container finished" podID="c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" containerID="77fa5a304f8688792e0943994f692423fbc82f155bf91cd539a237b4fb047222" exitCode=0
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.745307 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" event={"ID":"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c","Type":"ContainerDied","Data":"77fa5a304f8688792e0943994f692423fbc82f155bf91cd539a237b4fb047222"}
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.813501 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22"
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.934369 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vpdn\" (UniqueName: \"kubernetes.io/projected/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-kube-api-access-5vpdn\") pod \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") "
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.934543 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-config\") pod \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") "
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.934588 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-nb\") pod \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") "
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.934666 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-svc\") pod \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") "
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.934698 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-sb\") pod \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") "
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.934725 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-swift-storage-0\") pod \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\" (UID: \"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c\") "
Nov 25 15:14:37 crc kubenswrapper[4731]: I1125 15:14:37.954056 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-kube-api-access-5vpdn" (OuterVolumeSpecName: "kube-api-access-5vpdn") pod "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" (UID: "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c"). InnerVolumeSpecName "kube-api-access-5vpdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.036561 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vpdn\" (UniqueName: \"kubernetes.io/projected/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-kube-api-access-5vpdn\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.037687 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.054169 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" (UID: "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.094436 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-config" (OuterVolumeSpecName: "config") pod "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" (UID: "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.102173 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" (UID: "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.119169 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" (UID: "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.122282 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" (UID: "c8716f45-9bcd-41a5-9c5e-048ba5ddd71c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.139916 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.139941 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.139950 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.139971 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-config\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.139979 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.227375 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vksdl"]
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.295540 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.759221 4731 generic.go:334] "Generic (PLEG): container finished" podID="0bdd6185-ea4d-4507-b925-5f093fc33931" containerID="f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932" exitCode=0
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.760061 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" event={"ID":"0bdd6185-ea4d-4507-b925-5f093fc33931","Type":"ContainerDied","Data":"f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932"}
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.760103 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" event={"ID":"0bdd6185-ea4d-4507-b925-5f093fc33931","Type":"ContainerStarted","Data":"daabd4dcd26f8e97f68fccc519397e8481f4d620380ab559ee20a87334631a91"}
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.762921 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7eb67bcd-fcd6-45fe-bb98-bc5334032775","Type":"ContainerStarted","Data":"a507a53777793bfbf8a5162e0f72554f727df91ce3e7d8f16c45dc1054ab6d46"}
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.766123 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c","Type":"ContainerStarted","Data":"854eb8894fbbf4fdaf45ae7258400f4b64c8926e7124e8730fd3adaa578df932"}
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.766885 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.772027 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"33e14cc6-322e-4d98-9f70-0bff81ffb319","Type":"ContainerStarted","Data":"a3f5df2b1ea5dda8e5e9181da1d5c963765e3e11f4412cf9f4be590e7588ada7"}
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.787117 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22"
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.787158 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-848cf88cfc-f9m22" event={"ID":"c8716f45-9bcd-41a5-9c5e-048ba5ddd71c","Type":"ContainerDied","Data":"59f53b2bebf61e64940f460f74a9c292b9b151963fd5a80e862001a08a9972bc"}
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.787196 4731 scope.go:117] "RemoveContainer" containerID="77fa5a304f8688792e0943994f692423fbc82f155bf91cd539a237b4fb047222"
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.810650 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.155937049 podStartE2EDuration="5.810632933s" podCreationTimestamp="2025-11-25 15:14:33 +0000 UTC" firstStartedPulling="2025-11-25 15:14:34.470427639 +0000 UTC m=+1233.217562069" lastFinishedPulling="2025-11-25 15:14:38.125123523 +0000 UTC m=+1236.872257953" observedRunningTime="2025-11-25 15:14:38.809886714 +0000 UTC m=+1237.557021154" watchObservedRunningTime="2025-11-25 15:14:38.810632933 +0000 UTC m=+1237.557767363"
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.891894 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f9m22"]
Nov 25 15:14:38 crc kubenswrapper[4731]: I1125 15:14:38.899276 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-f9m22"]
Nov 25 15:14:39 crc kubenswrapper[4731]: I1125 15:14:39.081415 4731 scope.go:117] "RemoveContainer" containerID="0f6e1913b0d8fc967baee1e9c64952c829aa47a4240d5698dbdbfe33430d259d"
Nov 25 15:14:39 crc kubenswrapper[4731]: I1125 15:14:39.801347 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"33e14cc6-322e-4d98-9f70-0bff81ffb319","Type":"ContainerStarted","Data":"aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01"}
Nov 25 15:14:39 crc kubenswrapper[4731]: I1125 15:14:39.810294 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" event={"ID":"0bdd6185-ea4d-4507-b925-5f093fc33931","Type":"ContainerStarted","Data":"699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda"}
Nov 25 15:14:39 crc kubenswrapper[4731]: I1125 15:14:39.810344 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-vksdl"
Nov 25 15:14:39 crc kubenswrapper[4731]: I1125 15:14:39.827966 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" podStartSLOduration=2.827948233 podStartE2EDuration="2.827948233s" podCreationTimestamp="2025-11-25 15:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:39.827380089 +0000 UTC m=+1238.574514519" watchObservedRunningTime="2025-11-25 15:14:39.827948233 +0000 UTC m=+1238.575082663"
Nov 25 15:14:40 crc kubenswrapper[4731]: I1125 15:14:40.093291 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 15:14:40 crc kubenswrapper[4731]: I1125 15:14:40.818157 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="33e14cc6-322e-4d98-9f70-0bff81ffb319" containerName="cinder-api-log" containerID="cri-o://aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01" gracePeriod=30
Nov 25 15:14:40 crc kubenswrapper[4731]: I1125 15:14:40.818229 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="33e14cc6-322e-4d98-9f70-0bff81ffb319" containerName="cinder-api" containerID="cri-o://7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4" gracePeriod=30
Nov 25 15:14:40 crc kubenswrapper[4731]: I1125 15:14:40.823867 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" path="/var/lib/kubelet/pods/c8716f45-9bcd-41a5-9c5e-048ba5ddd71c/volumes"
Nov 25 15:14:40 crc kubenswrapper[4731]: I1125 15:14:40.825000 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 25 15:14:40 crc kubenswrapper[4731]: I1125 15:14:40.825023 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"33e14cc6-322e-4d98-9f70-0bff81ffb319","Type":"ContainerStarted","Data":"7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4"}
Nov 25 15:14:40 crc kubenswrapper[4731]: I1125 15:14:40.830909 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7eb67bcd-fcd6-45fe-bb98-bc5334032775","Type":"ContainerStarted","Data":"0c1e987919213c8ce10fb99b2fd1546baba5a7ec8fbdff5a0e10886ee6198c7e"}
Nov 25 15:14:40 crc kubenswrapper[4731]: I1125 15:14:40.830972 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7eb67bcd-fcd6-45fe-bb98-bc5334032775","Type":"ContainerStarted","Data":"c25726465031a976692a06fbe53809567470f42f7fc5bb820ebda0c9bfdbf73d"}
Nov 25 15:14:40 crc kubenswrapper[4731]: I1125 15:14:40.851595 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.851575572 podStartE2EDuration="3.851575572s" podCreationTimestamp="2025-11-25 15:14:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:40.839750324 +0000 UTC m=+1239.586884754" watchObservedRunningTime="2025-11-25 15:14:40.851575572 +0000 UTC m=+1239.598710002"
Nov 25 15:14:40 crc kubenswrapper[4731]: I1125 15:14:40.872543 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.781201615 podStartE2EDuration="4.872522211s" podCreationTimestamp="2025-11-25 15:14:36 +0000 UTC" firstStartedPulling="2025-11-25 15:14:38.058965604 +0000 UTC m=+1236.806100034" lastFinishedPulling="2025-11-25 15:14:39.1502862 +0000 UTC m=+1237.897420630" observedRunningTime="2025-11-25 15:14:40.866072518 +0000 UTC m=+1239.613206968" watchObservedRunningTime="2025-11-25 15:14:40.872522211 +0000 UTC m=+1239.619656651"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.554201 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.629633 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9phgn\" (UniqueName: \"kubernetes.io/projected/33e14cc6-322e-4d98-9f70-0bff81ffb319-kube-api-access-9phgn\") pod \"33e14cc6-322e-4d98-9f70-0bff81ffb319\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") "
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.629725 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33e14cc6-322e-4d98-9f70-0bff81ffb319-logs\") pod \"33e14cc6-322e-4d98-9f70-0bff81ffb319\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") "
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.629765 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-combined-ca-bundle\") pod \"33e14cc6-322e-4d98-9f70-0bff81ffb319\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") "
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.629795 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data-custom\") pod \"33e14cc6-322e-4d98-9f70-0bff81ffb319\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") "
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.629809 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e14cc6-322e-4d98-9f70-0bff81ffb319-etc-machine-id\") pod \"33e14cc6-322e-4d98-9f70-0bff81ffb319\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") "
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.629841 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-scripts\") pod \"33e14cc6-322e-4d98-9f70-0bff81ffb319\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") "
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.629888 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data\") pod \"33e14cc6-322e-4d98-9f70-0bff81ffb319\" (UID: \"33e14cc6-322e-4d98-9f70-0bff81ffb319\") "
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.630115 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/33e14cc6-322e-4d98-9f70-0bff81ffb319-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "33e14cc6-322e-4d98-9f70-0bff81ffb319" (UID: "33e14cc6-322e-4d98-9f70-0bff81ffb319"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.630388 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33e14cc6-322e-4d98-9f70-0bff81ffb319-logs" (OuterVolumeSpecName: "logs") pod "33e14cc6-322e-4d98-9f70-0bff81ffb319" (UID: "33e14cc6-322e-4d98-9f70-0bff81ffb319"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.630504 4731 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/33e14cc6-322e-4d98-9f70-0bff81ffb319-etc-machine-id\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.647623 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-scripts" (OuterVolumeSpecName: "scripts") pod "33e14cc6-322e-4d98-9f70-0bff81ffb319" (UID: "33e14cc6-322e-4d98-9f70-0bff81ffb319"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.651965 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "33e14cc6-322e-4d98-9f70-0bff81ffb319" (UID: "33e14cc6-322e-4d98-9f70-0bff81ffb319"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.653981 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33e14cc6-322e-4d98-9f70-0bff81ffb319-kube-api-access-9phgn" (OuterVolumeSpecName: "kube-api-access-9phgn") pod "33e14cc6-322e-4d98-9f70-0bff81ffb319" (UID: "33e14cc6-322e-4d98-9f70-0bff81ffb319"). InnerVolumeSpecName "kube-api-access-9phgn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.687995 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33e14cc6-322e-4d98-9f70-0bff81ffb319" (UID: "33e14cc6-322e-4d98-9f70-0bff81ffb319"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.718996 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data" (OuterVolumeSpecName: "config-data") pod "33e14cc6-322e-4d98-9f70-0bff81ffb319" (UID: "33e14cc6-322e-4d98-9f70-0bff81ffb319"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.732080 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/33e14cc6-322e-4d98-9f70-0bff81ffb319-logs\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.732112 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.732125 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.732133 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.732140 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/33e14cc6-322e-4d98-9f70-0bff81ffb319-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.732151 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9phgn\" (UniqueName: \"kubernetes.io/projected/33e14cc6-322e-4d98-9f70-0bff81ffb319-kube-api-access-9phgn\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.842127 4731 generic.go:334] "Generic (PLEG): container finished" podID="33e14cc6-322e-4d98-9f70-0bff81ffb319" containerID="7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4" exitCode=0
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.842165 4731 generic.go:334] "Generic (PLEG): container finished" podID="33e14cc6-322e-4d98-9f70-0bff81ffb319" containerID="aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01" exitCode=143
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.842181 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.842206 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"33e14cc6-322e-4d98-9f70-0bff81ffb319","Type":"ContainerDied","Data":"7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4"}
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.842244 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"33e14cc6-322e-4d98-9f70-0bff81ffb319","Type":"ContainerDied","Data":"aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01"}
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.842256 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"33e14cc6-322e-4d98-9f70-0bff81ffb319","Type":"ContainerDied","Data":"a3f5df2b1ea5dda8e5e9181da1d5c963765e3e11f4412cf9f4be590e7588ada7"}
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.842275 4731 scope.go:117] "RemoveContainer" containerID="7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.886902 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.901064 4731 scope.go:117] "RemoveContainer" containerID="aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.926962 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.945476 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Nov 25 15:14:41 crc kubenswrapper[4731]: E1125 15:14:41.945928 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" containerName="dnsmasq-dns"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.945949 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" containerName="dnsmasq-dns"
Nov 25 15:14:41 crc kubenswrapper[4731]: E1125 15:14:41.945960 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e14cc6-322e-4d98-9f70-0bff81ffb319" containerName="cinder-api-log"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.945966 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e14cc6-322e-4d98-9f70-0bff81ffb319" containerName="cinder-api-log"
Nov 25 15:14:41 crc kubenswrapper[4731]: E1125 15:14:41.945994 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33e14cc6-322e-4d98-9f70-0bff81ffb319" containerName="cinder-api"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.946004 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="33e14cc6-322e-4d98-9f70-0bff81ffb319" containerName="cinder-api"
Nov 25 15:14:41 crc kubenswrapper[4731]: E1125 15:14:41.946016 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" containerName="init"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.946022 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" containerName="init"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.946201 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e14cc6-322e-4d98-9f70-0bff81ffb319" containerName="cinder-api-log"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.946223 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="33e14cc6-322e-4d98-9f70-0bff81ffb319" containerName="cinder-api"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.946236 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8716f45-9bcd-41a5-9c5e-048ba5ddd71c" containerName="dnsmasq-dns"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.947153 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.955021 4731 scope.go:117] "RemoveContainer" containerID="7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4"
Nov 25 15:14:41 crc kubenswrapper[4731]: E1125 15:14:41.955460 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4\": container with ID starting with 7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4 not found: ID does not exist" containerID="7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.955490 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4"} err="failed to get container status \"7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4\": rpc error: code = NotFound desc = could not find container \"7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4\": container with ID starting with 7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4 not found: ID does not exist"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.955514 4731 scope.go:117] "RemoveContainer" containerID="aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01"
Nov 25 15:14:41 crc kubenswrapper[4731]: E1125 15:14:41.955858 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01\": container with ID starting with aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01 not found: ID does not exist" containerID="aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.955893 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01"} err="failed to get container status \"aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01\": rpc error: code = NotFound desc = could not find container \"aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01\": container with ID starting with aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01 not found: ID does not exist"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.955908 4731 scope.go:117] "RemoveContainer" containerID="7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.960005 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4"} err="failed to get container status \"7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4\": rpc error: code = NotFound desc = could not find container \"7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4\": container with ID starting with 7277d70a00fc1d7a1f5ce73c36a7f0c2365dbf57cefb421ae0f258eb29b681a4 not found: ID does not exist"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.960064 4731 scope.go:117] "RemoveContainer" containerID="aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.964352 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.964542 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.964639 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.964766 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01"} err="failed to get container status \"aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01\": rpc error: code = NotFound desc = could not find container \"aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01\": container with ID starting with aa899f70cd50cb24a21341da29ea6a7b1563edd448b1b300c434b4b42ffe4a01 not found: ID does not exist"
Nov 25 15:14:41 crc kubenswrapper[4731]: I1125 15:14:41.990841 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.037575 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-config-data-custom\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.037635 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.037677 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.037713 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-config-data\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.037763 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-scripts\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.037781 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-logs\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.037835 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q5qx\" (UniqueName: \"kubernetes.io/projected/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-kube-api-access-6q5qx\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.037986 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.038048 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.139935 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.140024 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.140061 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-config-data-custom\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.140093 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.140120 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.140144 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-config-data\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.140202 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-scripts\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.140223 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-logs\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.140249 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6q5qx\" (UniqueName: \"kubernetes.io/projected/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-kube-api-access-6q5qx\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.141956 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.148302 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-logs\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.152424 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-public-tls-certs\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.153270 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-scripts\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.153390 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.158570 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-config-data-custom\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.162860 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.164906 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-config-data\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.165556 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q5qx\" (UniqueName: \"kubernetes.io/projected/a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3-kube-api-access-6q5qx\") pod \"cinder-api-0\" (UID: \"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3\") " pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.299183 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.351215 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.842367 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33e14cc6-322e-4d98-9f70-0bff81ffb319" path="/var/lib/kubelet/pods/33e14cc6-322e-4d98-9f70-0bff81ffb319/volumes"
Nov 25 15:14:42 crc kubenswrapper[4731]: I1125 15:14:42.864580 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Nov 25 15:14:42 crc kubenswrapper[4731]: W1125 15:14:42.871147 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda95c46d5_eb91_4400_ae4a_ec69ddfdb8e3.slice/crio-c1f06cf668dcd513ee2ec1d9e92607b5ef62afe9a5b40e1cbfc7d61a6bf9c06e WatchSource:0}: Error finding container c1f06cf668dcd513ee2ec1d9e92607b5ef62afe9a5b40e1cbfc7d61a6bf9c06e: Status 404 returned error can't find the container with id c1f06cf668dcd513ee2ec1d9e92607b5ef62afe9a5b40e1cbfc7d61a6bf9c06e
Nov 25 15:14:43 crc kubenswrapper[4731]: I1125 15:14:43.436356 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9dd67bbc8-dfsxx"
Nov 25 15:14:43 crc kubenswrapper[4731]: I1125 15:14:43.468602 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-9dd67bbc8-dfsxx"
Nov 25 15:14:43 crc kubenswrapper[4731]: I1125 15:14:43.889777 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3","Type":"ContainerStarted","Data":"159ad4c7c0226450919099fa50ede042a69553fd345ebfc715b0824a38e86a1f"}
Nov 25 15:14:43 crc kubenswrapper[4731]: I1125 15:14:43.890197 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3","Type":"ContainerStarted","Data":"c1f06cf668dcd513ee2ec1d9e92607b5ef62afe9a5b40e1cbfc7d61a6bf9c06e"}
Nov 25 15:14:44 crc kubenswrapper[4731]: I1125 15:14:44.897877 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3","Type":"ContainerStarted","Data":"613e63151ce033f37c75b2af26c89321ef7e12c8d226e7afe4180492041141dc"}
Nov 25 15:14:44 crc kubenswrapper[4731]: I1125 15:14:44.899007 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Nov 25 15:14:44 crc kubenswrapper[4731]: I1125 15:14:44.928072 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.928047264 podStartE2EDuration="3.928047264s" podCreationTimestamp="2025-11-25 15:14:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:44.919593471 +0000 UTC m=+1243.666727911" watchObservedRunningTime="2025-11-25 15:14:44.928047264 +0000 UTC m=+1243.675181694"
Nov 25 15:14:46 crc kubenswrapper[4731]: I1125 15:14:46.267513 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dc4f8989b-lbfrn"
Nov 25 15:14:46 crc kubenswrapper[4731]: I1125 15:14:46.323429 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6dc4f8989b-lbfrn"
Nov 25 15:14:46 crc kubenswrapper[4731]: I1125 15:14:46.390597 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9dd67bbc8-dfsxx"]
Nov 25 15:14:46 crc kubenswrapper[4731]: I1125 15:14:46.390806 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9dd67bbc8-dfsxx" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerName="barbican-api-log" containerID="cri-o://2ad0348fa9586e8a3b34d4f98ad9de624e5eaa5e4f2103acae3d71db5d429e9a" gracePeriod=30
Nov 25 15:14:46 crc kubenswrapper[4731]: I1125 15:14:46.391228 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-9dd67bbc8-dfsxx" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerName="barbican-api" containerID="cri-o://bd77b646d49c55c07df0920b8e18ea4db9dc2420e8dd045d73e01f7ab50dba77" gracePeriod=30
Nov 25 15:14:46 crc kubenswrapper[4731]: I1125 15:14:46.917446 4731 generic.go:334] "Generic (PLEG): container finished" podID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerID="2ad0348fa9586e8a3b34d4f98ad9de624e5eaa5e4f2103acae3d71db5d429e9a" exitCode=143
Nov 25 15:14:46 crc kubenswrapper[4731]: I1125 15:14:46.919109 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dd67bbc8-dfsxx" event={"ID":"3f2f3c8a-626e-442d-880e-3f6b6358c5f9","Type":"ContainerDied","Data":"2ad0348fa9586e8a3b34d4f98ad9de624e5eaa5e4f2103acae3d71db5d429e9a"}
Nov 25 15:14:47 crc kubenswrapper[4731]: I1125 15:14:47.589493 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Nov 25 15:14:47 crc kubenswrapper[4731]: I1125 15:14:47.633838 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Nov 25 15:14:47 crc kubenswrapper[4731]: I1125 15:14:47.649000 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-vksdl"
Nov 25 15:14:47 crc kubenswrapper[4731]: I1125 15:14:47.712311 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gq9xr"]
Nov 25 15:14:47 crc kubenswrapper[4731]: I1125 15:14:47.712843 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" podUID="b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" containerName="dnsmasq-dns" containerID="cri-o://1016014fa83f77caa25d75cae216d0fd13a5f3c882b8edd5beec87f0ef88fc76" gracePeriod=10
Nov 25 15:14:47 crc kubenswrapper[4731]: I1125 15:14:47.931023 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" podUID="b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.150:5353: connect: connection refused"
Nov 25 15:14:47 crc kubenswrapper[4731]: I1125 15:14:47.932885 4731 generic.go:334] "Generic (PLEG): container finished" podID="b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" containerID="1016014fa83f77caa25d75cae216d0fd13a5f3c882b8edd5beec87f0ef88fc76" exitCode=0
Nov 25 15:14:47 crc kubenswrapper[4731]: I1125 15:14:47.933119 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" containerName="cinder-scheduler" containerID="cri-o://c25726465031a976692a06fbe53809567470f42f7fc5bb820ebda0c9bfdbf73d" gracePeriod=30
Nov 25 15:14:47 crc kubenswrapper[4731]: I1125 15:14:47.933209 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" event={"ID":"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc","Type":"ContainerDied","Data":"1016014fa83f77caa25d75cae216d0fd13a5f3c882b8edd5beec87f0ef88fc76"}
Nov 25 15:14:47 crc kubenswrapper[4731]: I1125 15:14:47.933585 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" containerName="probe" containerID="cri-o://0c1e987919213c8ce10fb99b2fd1546baba5a7ec8fbdff5a0e10886ee6198c7e" gracePeriod=30
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.218035 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6cf794cbfb-89vbb"
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.379879 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr"
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.470173 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-svc\") pod \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") "
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.470226 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-nb\") pod \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") "
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.470287 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-config\") pod \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") "
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.471074 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-swift-storage-0\") pod \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") "
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.471098 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79vjl\" (UniqueName: \"kubernetes.io/projected/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-kube-api-access-79vjl\") pod \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") "
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.471150 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-sb\") pod \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\" (UID: \"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc\") "
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.500519 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-kube-api-access-79vjl" (OuterVolumeSpecName: "kube-api-access-79vjl") pod "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" (UID: "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc"). InnerVolumeSpecName "kube-api-access-79vjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.518411 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" (UID: "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.520652 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" (UID: "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.522296 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-config" (OuterVolumeSpecName: "config") pod "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" (UID: "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.522365 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-65d859dcf6-bblhv"
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.522406 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" (UID: "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.535920 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" (UID: "b1a6ad1b-747e-4a9e-a567-ff881fa18cfc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.573736 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-svc\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.573766 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.573777 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-config\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.573786 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.573796 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79vjl\" (UniqueName: \"kubernetes.io/projected/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-kube-api-access-79vjl\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.573805 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.955385 4731 generic.go:334] "Generic (PLEG): container finished" podID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" containerID="0c1e987919213c8ce10fb99b2fd1546baba5a7ec8fbdff5a0e10886ee6198c7e" exitCode=0
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.955476 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7eb67bcd-fcd6-45fe-bb98-bc5334032775","Type":"ContainerDied","Data":"0c1e987919213c8ce10fb99b2fd1546baba5a7ec8fbdff5a0e10886ee6198c7e"}
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.963686 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr" event={"ID":"b1a6ad1b-747e-4a9e-a567-ff881fa18cfc","Type":"ContainerDied","Data":"63d0a5c079a9ea3c11cb4946bf2a770d370dd183e0b2364a21ccfd8c82d1623a"}
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.963738 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7b667979-gq9xr"
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.963751 4731 scope.go:117] "RemoveContainer" containerID="1016014fa83f77caa25d75cae216d0fd13a5f3c882b8edd5beec87f0ef88fc76"
Nov 25 15:14:48 crc kubenswrapper[4731]: I1125 15:14:48.990236 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gq9xr"]
Nov 25 15:14:49 crc kubenswrapper[4731]: I1125 15:14:48.998465 4731 scope.go:117] "RemoveContainer" containerID="7600a7f68253d63dbf8cb0033e7a9c75321bb5c638b6389a50a1e1a513afe997"
Nov 25 15:14:49 crc kubenswrapper[4731]: I1125 15:14:49.000360 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7b667979-gq9xr"]
Nov 25 15:14:49 crc kubenswrapper[4731]: I1125 15:14:49.143207 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-ccd9b5c94-l8cl7"
Nov 25 15:14:49 crc kubenswrapper[4731]: I1125 15:14:49.153469 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-ccd9b5c94-l8cl7"
Nov 25 15:14:49 crc kubenswrapper[4731]: I1125 15:14:49.542768 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9dd67bbc8-dfsxx" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:51616->10.217.0.156:9311: read: connection reset by peer"
Nov 25 15:14:49 crc kubenswrapper[4731]: I1125 15:14:49.542798 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-9dd67bbc8-dfsxx" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.156:9311/healthcheck\": read tcp 10.217.0.2:51618->10.217.0.156:9311: read: connection reset by peer"
Nov 25 15:14:49 crc kubenswrapper[4731]: I1125 15:14:49.975261 4731 generic.go:334] "Generic (PLEG): container finished" podID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerID="bd77b646d49c55c07df0920b8e18ea4db9dc2420e8dd045d73e01f7ab50dba77" exitCode=0
Nov 25 15:14:49 crc kubenswrapper[4731]: I1125 15:14:49.975328 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dd67bbc8-dfsxx" event={"ID":"3f2f3c8a-626e-442d-880e-3f6b6358c5f9","Type":"ContainerDied","Data":"bd77b646d49c55c07df0920b8e18ea4db9dc2420e8dd045d73e01f7ab50dba77"}
Nov 25 15:14:49 crc kubenswrapper[4731]: I1125 15:14:49.975368 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-9dd67bbc8-dfsxx" event={"ID":"3f2f3c8a-626e-442d-880e-3f6b6358c5f9","Type":"ContainerDied","Data":"c51c998f463af6051c498ae9afc9d00be2716a45103ccaa6c4c2a8fd798ae8b1"}
Nov 25 15:14:49 crc kubenswrapper[4731]: I1125 15:14:49.975378 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c51c998f463af6051c498ae9afc9d00be2716a45103ccaa6c4c2a8fd798ae8b1"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.038094 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Nov 25 15:14:50 crc kubenswrapper[4731]: E1125 15:14:50.038617 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" containerName="init"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.038633 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" containerName="init"
Nov 25 15:14:50 crc kubenswrapper[4731]: E1125 15:14:50.038644 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" containerName="dnsmasq-dns"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.038651 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" containerName="dnsmasq-dns"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.038870 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" containerName="dnsmasq-dns"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.039406 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.047471 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-9dd67bbc8-dfsxx"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.097897 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.098123 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data-custom\") pod \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") "
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.098226 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-logs\") pod \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") "
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.098269 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-combined-ca-bundle\") pod \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") "
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.098332 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfzhn\" (UniqueName: \"kubernetes.io/projected/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-kube-api-access-xfzhn\") pod \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") "
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.098377 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data\") pod \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\" (UID: \"3f2f3c8a-626e-442d-880e-3f6b6358c5f9\") "
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.099289 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.099317 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-logs" (OuterVolumeSpecName: "logs") pod "3f2f3c8a-626e-442d-880e-3f6b6358c5f9" (UID: "3f2f3c8a-626e-442d-880e-3f6b6358c5f9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.100328 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-dp6hn"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.101956 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.102053 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.102164 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.102405 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wszdz\" (UniqueName: \"kubernetes.io/projected/8815eebf-c8f8-49f9-94fb-5c3c454514d5-kube-api-access-wszdz\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.104249 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-logs\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.155122 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-kube-api-access-xfzhn" (OuterVolumeSpecName: "kube-api-access-xfzhn") pod "3f2f3c8a-626e-442d-880e-3f6b6358c5f9" (UID: "3f2f3c8a-626e-442d-880e-3f6b6358c5f9"). InnerVolumeSpecName "kube-api-access-xfzhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.185419 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3f2f3c8a-626e-442d-880e-3f6b6358c5f9" (UID: "3f2f3c8a-626e-442d-880e-3f6b6358c5f9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.187137 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.217367 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.217475 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.218995 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.219834 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.220072 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wszdz\" (UniqueName: \"kubernetes.io/projected/8815eebf-c8f8-49f9-94fb-5c3c454514d5-kube-api-access-wszdz\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.224344 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config-secret\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.224297 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient"
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.225531 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfzhn\" (UniqueName: \"kubernetes.io/projected/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-kube-api-access-xfzhn\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.225563 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data-custom\") on node \"crc\" DevicePath \"\""
Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.227891 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data"
(OuterVolumeSpecName: "config-data") pod "3f2f3c8a-626e-442d-880e-3f6b6358c5f9" (UID: "3f2f3c8a-626e-442d-880e-3f6b6358c5f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.230529 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3f2f3c8a-626e-442d-880e-3f6b6358c5f9" (UID: "3f2f3c8a-626e-442d-880e-3f6b6358c5f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.241166 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wszdz\" (UniqueName: \"kubernetes.io/projected/8815eebf-c8f8-49f9-94fb-5c3c454514d5-kube-api-access-wszdz\") pod \"openstackclient\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " pod="openstack/openstackclient" Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.327140 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.327171 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f2f3c8a-626e-442d-880e-3f6b6358c5f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.357528 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.802456 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-545595f45c-m798f" Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.823808 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1a6ad1b-747e-4a9e-a567-ff881fa18cfc" path="/var/lib/kubelet/pods/b1a6ad1b-747e-4a9e-a567-ff881fa18cfc/volumes" Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.855277 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.879507 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cf794cbfb-89vbb"] Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.880269 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cf794cbfb-89vbb" podUID="87214202-a4d1-48c1-af50-b9eed07cab9d" containerName="neutron-api" containerID="cri-o://85b1ecf5ffe985766fb98c198382bdb0e5d3d5782175a6ffd6c2811214460267" gracePeriod=30 Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.880448 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6cf794cbfb-89vbb" podUID="87214202-a4d1-48c1-af50-b9eed07cab9d" containerName="neutron-httpd" containerID="cri-o://dc1b89295e73ed9b1d568c3af548142380b7b50601c3855a739bf64056d10206" gracePeriod=30 Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.985104 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-9dd67bbc8-dfsxx" Nov 25 15:14:50 crc kubenswrapper[4731]: I1125 15:14:50.985966 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8815eebf-c8f8-49f9-94fb-5c3c454514d5","Type":"ContainerStarted","Data":"b6a1c0a0f9ac0888d5bdf5b8930ce6485c9afabdb71e103a3fd405ae4b935ddf"} Nov 25 15:14:51 crc kubenswrapper[4731]: I1125 15:14:51.008461 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-9dd67bbc8-dfsxx"] Nov 25 15:14:51 crc kubenswrapper[4731]: I1125 15:14:51.023958 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-9dd67bbc8-dfsxx"] Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.021857 4731 generic.go:334] "Generic (PLEG): container finished" podID="87214202-a4d1-48c1-af50-b9eed07cab9d" containerID="dc1b89295e73ed9b1d568c3af548142380b7b50601c3855a739bf64056d10206" exitCode=0 Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.021929 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf794cbfb-89vbb" event={"ID":"87214202-a4d1-48c1-af50-b9eed07cab9d","Type":"ContainerDied","Data":"dc1b89295e73ed9b1d568c3af548142380b7b50601c3855a739bf64056d10206"} Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.032925 4731 generic.go:334] "Generic (PLEG): container finished" podID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" containerID="c25726465031a976692a06fbe53809567470f42f7fc5bb820ebda0c9bfdbf73d" exitCode=0 Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.032981 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7eb67bcd-fcd6-45fe-bb98-bc5334032775","Type":"ContainerDied","Data":"c25726465031a976692a06fbe53809567470f42f7fc5bb820ebda0c9bfdbf73d"} Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.136043 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.266861 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7eb67bcd-fcd6-45fe-bb98-bc5334032775-etc-machine-id\") pod \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.266981 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-combined-ca-bundle\") pod \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.267041 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-scripts\") pod \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.267082 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data-custom\") pod \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.267131 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gdk5\" (UniqueName: \"kubernetes.io/projected/7eb67bcd-fcd6-45fe-bb98-bc5334032775-kube-api-access-8gdk5\") pod \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.267199 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data\") pod \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\" (UID: \"7eb67bcd-fcd6-45fe-bb98-bc5334032775\") " Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.268186 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7eb67bcd-fcd6-45fe-bb98-bc5334032775-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7eb67bcd-fcd6-45fe-bb98-bc5334032775" (UID: "7eb67bcd-fcd6-45fe-bb98-bc5334032775"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.274046 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7eb67bcd-fcd6-45fe-bb98-bc5334032775" (UID: "7eb67bcd-fcd6-45fe-bb98-bc5334032775"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.282543 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7eb67bcd-fcd6-45fe-bb98-bc5334032775-kube-api-access-8gdk5" (OuterVolumeSpecName: "kube-api-access-8gdk5") pod "7eb67bcd-fcd6-45fe-bb98-bc5334032775" (UID: "7eb67bcd-fcd6-45fe-bb98-bc5334032775"). InnerVolumeSpecName "kube-api-access-8gdk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.282716 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-scripts" (OuterVolumeSpecName: "scripts") pod "7eb67bcd-fcd6-45fe-bb98-bc5334032775" (UID: "7eb67bcd-fcd6-45fe-bb98-bc5334032775"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.356522 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7eb67bcd-fcd6-45fe-bb98-bc5334032775" (UID: "7eb67bcd-fcd6-45fe-bb98-bc5334032775"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.369020 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.369049 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.369058 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.369067 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gdk5\" (UniqueName: \"kubernetes.io/projected/7eb67bcd-fcd6-45fe-bb98-bc5334032775-kube-api-access-8gdk5\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.369079 4731 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7eb67bcd-fcd6-45fe-bb98-bc5334032775-etc-machine-id\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.376804 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data" (OuterVolumeSpecName: "config-data") pod "7eb67bcd-fcd6-45fe-bb98-bc5334032775" (UID: "7eb67bcd-fcd6-45fe-bb98-bc5334032775"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.470633 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7eb67bcd-fcd6-45fe-bb98-bc5334032775-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:52 crc kubenswrapper[4731]: I1125 15:14:52.826728 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" path="/var/lib/kubelet/pods/3f2f3c8a-626e-442d-880e-3f6b6358c5f9/volumes" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.067500 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7eb67bcd-fcd6-45fe-bb98-bc5334032775","Type":"ContainerDied","Data":"a507a53777793bfbf8a5162e0f72554f727df91ce3e7d8f16c45dc1054ab6d46"} Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.067553 4731 scope.go:117] "RemoveContainer" containerID="0c1e987919213c8ce10fb99b2fd1546baba5a7ec8fbdff5a0e10886ee6198c7e" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.067596 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.093981 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.105394 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.121558 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:14:53 crc kubenswrapper[4731]: E1125 15:14:53.122057 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerName="barbican-api" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.122078 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerName="barbican-api" Nov 25 15:14:53 crc kubenswrapper[4731]: E1125 15:14:53.122099 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerName="barbican-api-log" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.122108 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerName="barbican-api-log" Nov 25 15:14:53 crc kubenswrapper[4731]: E1125 15:14:53.122141 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" containerName="cinder-scheduler" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.122150 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" containerName="cinder-scheduler" Nov 25 15:14:53 crc kubenswrapper[4731]: E1125 15:14:53.122161 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" containerName="probe" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.122169 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" containerName="probe" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.122439 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerName="barbican-api" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.122464 4731 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" containerName="probe" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.122477 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" containerName="cinder-scheduler" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.122493 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f2f3c8a-626e-442d-880e-3f6b6358c5f9" containerName="barbican-api-log" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.123718 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.127366 4731 scope.go:117] "RemoveContainer" containerID="c25726465031a976692a06fbe53809567470f42f7fc5bb820ebda0c9bfdbf73d" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.127562 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.133247 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.197059 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.197106 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-scripts\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.197144 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49hqn\" (UniqueName: \"kubernetes.io/projected/23096767-3b8d-4153-8d68-0285b66b2b1b-kube-api-access-49hqn\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.197202 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-config-data\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.197252 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.197269 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23096767-3b8d-4153-8d68-0285b66b2b1b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc 
kubenswrapper[4731]: I1125 15:14:53.299514 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-config-data\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.299592 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.299619 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23096767-3b8d-4153-8d68-0285b66b2b1b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.299668 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.299684 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-scripts\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.299717 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49hqn\" (UniqueName: \"kubernetes.io/projected/23096767-3b8d-4153-8d68-0285b66b2b1b-kube-api-access-49hqn\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.299958 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/23096767-3b8d-4153-8d68-0285b66b2b1b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.305344 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.305789 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-config-data\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.309442 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-config-data-custom\") pod \"cinder-scheduler-0\" 
(UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.309456 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/23096767-3b8d-4153-8d68-0285b66b2b1b-scripts\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.319239 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49hqn\" (UniqueName: \"kubernetes.io/projected/23096767-3b8d-4153-8d68-0285b66b2b1b-kube-api-access-49hqn\") pod \"cinder-scheduler-0\" (UID: \"23096767-3b8d-4153-8d68-0285b66b2b1b\") " pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.453715 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Nov 25 15:14:53 crc kubenswrapper[4731]: I1125 15:14:53.914292 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Nov 25 15:14:53 crc kubenswrapper[4731]: W1125 15:14:53.932199 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23096767_3b8d_4153_8d68_0285b66b2b1b.slice/crio-b6e55de11a536fd1766642bde18a45a79f3e906abcba629770b5614a32a58372 WatchSource:0}: Error finding container b6e55de11a536fd1766642bde18a45a79f3e906abcba629770b5614a32a58372: Status 404 returned error can't find the container with id b6e55de11a536fd1766642bde18a45a79f3e906abcba629770b5614a32a58372 Nov 25 15:14:54 crc kubenswrapper[4731]: I1125 15:14:54.087001 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"23096767-3b8d-4153-8d68-0285b66b2b1b","Type":"ContainerStarted","Data":"b6e55de11a536fd1766642bde18a45a79f3e906abcba629770b5614a32a58372"} Nov 25 15:14:54 crc kubenswrapper[4731]: I1125 15:14:54.489112 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Nov 25 15:14:54 crc kubenswrapper[4731]: I1125 15:14:54.824692 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7eb67bcd-fcd6-45fe-bb98-bc5334032775" path="/var/lib/kubelet/pods/7eb67bcd-fcd6-45fe-bb98-bc5334032775/volumes" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.111476 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"23096767-3b8d-4153-8d68-0285b66b2b1b","Type":"ContainerStarted","Data":"fcb616b23724d263a8f330f4356a2b1c15a3176015a6ecfb53b7ecbdfefd9b0c"} Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.120206 4731 generic.go:334] "Generic (PLEG): container finished" podID="87214202-a4d1-48c1-af50-b9eed07cab9d" containerID="85b1ecf5ffe985766fb98c198382bdb0e5d3d5782175a6ffd6c2811214460267" exitCode=0 Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.120260 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf794cbfb-89vbb" event={"ID":"87214202-a4d1-48c1-af50-b9eed07cab9d","Type":"ContainerDied","Data":"85b1ecf5ffe985766fb98c198382bdb0e5d3d5782175a6ffd6c2811214460267"} Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.185033 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.259951 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-httpd-config\") pod \"87214202-a4d1-48c1-af50-b9eed07cab9d\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.260032 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-config\") pod \"87214202-a4d1-48c1-af50-b9eed07cab9d\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.260123 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-ovndb-tls-certs\") pod \"87214202-a4d1-48c1-af50-b9eed07cab9d\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.260152 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-combined-ca-bundle\") pod \"87214202-a4d1-48c1-af50-b9eed07cab9d\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.267194 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcdkm\" (UniqueName: \"kubernetes.io/projected/87214202-a4d1-48c1-af50-b9eed07cab9d-kube-api-access-vcdkm\") pod \"87214202-a4d1-48c1-af50-b9eed07cab9d\" (UID: \"87214202-a4d1-48c1-af50-b9eed07cab9d\") " Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.271367 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "87214202-a4d1-48c1-af50-b9eed07cab9d" (UID: "87214202-a4d1-48c1-af50-b9eed07cab9d"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.293866 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-85b87f8dc-297hh"] Nov 25 15:14:55 crc kubenswrapper[4731]: E1125 15:14:55.294403 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87214202-a4d1-48c1-af50-b9eed07cab9d" containerName="neutron-api" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.294428 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="87214202-a4d1-48c1-af50-b9eed07cab9d" containerName="neutron-api" Nov 25 15:14:55 crc kubenswrapper[4731]: E1125 15:14:55.294462 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87214202-a4d1-48c1-af50-b9eed07cab9d" containerName="neutron-httpd" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.294477 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="87214202-a4d1-48c1-af50-b9eed07cab9d" containerName="neutron-httpd" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.294661 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="87214202-a4d1-48c1-af50-b9eed07cab9d" containerName="neutron-api" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.294687 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="87214202-a4d1-48c1-af50-b9eed07cab9d" containerName="neutron-httpd" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.295305 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87214202-a4d1-48c1-af50-b9eed07cab9d-kube-api-access-vcdkm" (OuterVolumeSpecName: "kube-api-access-vcdkm") pod "87214202-a4d1-48c1-af50-b9eed07cab9d" (UID: "87214202-a4d1-48c1-af50-b9eed07cab9d"). InnerVolumeSpecName "kube-api-access-vcdkm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.307095 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.312037 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.312184 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.312245 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.329173 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85b87f8dc-297hh"] Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.369175 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-public-tls-certs\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.369224 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-combined-ca-bundle\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.369640 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5a662040-fd64-4441-86b9-9fc355a75142-etc-swift\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.370765 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a662040-fd64-4441-86b9-9fc355a75142-log-httpd\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.370847 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-internal-tls-certs\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.371039 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a662040-fd64-4441-86b9-9fc355a75142-run-httpd\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.371100 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-config-data\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 
15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.371211 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvcpw\" (UniqueName: \"kubernetes.io/projected/5a662040-fd64-4441-86b9-9fc355a75142-kube-api-access-gvcpw\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.371317 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcdkm\" (UniqueName: \"kubernetes.io/projected/87214202-a4d1-48c1-af50-b9eed07cab9d-kube-api-access-vcdkm\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.371333 4731 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-httpd-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.404252 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.405143 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="ceilometer-central-agent" containerID="cri-o://6a9167a272110e6418746593fdeee614209fc2aa77060e663aa897b38d286125" gracePeriod=30 Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.405193 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="sg-core" containerID="cri-o://981952b611751d2a4d905c6db56ef9324c0cae6c74ec8685ba3623cdcb9057da" gracePeriod=30 Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.405250 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="ceilometer-notification-agent" containerID="cri-o://7225a168f9acddc98980b21367a098dd8279b971d46631aaa577fe1b52d22c59" gracePeriod=30 Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.405294 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="proxy-httpd" containerID="cri-o://854eb8894fbbf4fdaf45ae7258400f4b64c8926e7124e8730fd3adaa578df932" gracePeriod=30 Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.418197 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "87214202-a4d1-48c1-af50-b9eed07cab9d" (UID: "87214202-a4d1-48c1-af50-b9eed07cab9d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.447004 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.451084 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-config" (OuterVolumeSpecName: "config") pod "87214202-a4d1-48c1-af50-b9eed07cab9d" (UID: "87214202-a4d1-48c1-af50-b9eed07cab9d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.452017 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "87214202-a4d1-48c1-af50-b9eed07cab9d" (UID: "87214202-a4d1-48c1-af50-b9eed07cab9d"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.472783 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-config-data\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.472919 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvcpw\" (UniqueName: \"kubernetes.io/projected/5a662040-fd64-4441-86b9-9fc355a75142-kube-api-access-gvcpw\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.473801 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-public-tls-certs\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.473846 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-combined-ca-bundle\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.474964 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5a662040-fd64-4441-86b9-9fc355a75142-etc-swift\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.475337 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a662040-fd64-4441-86b9-9fc355a75142-log-httpd\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: 
I1125 15:14:55.475408 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-internal-tls-certs\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.475983 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a662040-fd64-4441-86b9-9fc355a75142-log-httpd\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.477029 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-config-data\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.478214 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-internal-tls-certs\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.478612 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-public-tls-certs\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.481536 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5a662040-fd64-4441-86b9-9fc355a75142-etc-swift\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.481866 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a662040-fd64-4441-86b9-9fc355a75142-combined-ca-bundle\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.481982 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a662040-fd64-4441-86b9-9fc355a75142-run-httpd\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.482118 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.482136 4731 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 
15:14:55.482148 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/87214202-a4d1-48c1-af50-b9eed07cab9d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.482387 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a662040-fd64-4441-86b9-9fc355a75142-run-httpd\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.488100 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvcpw\" (UniqueName: \"kubernetes.io/projected/5a662040-fd64-4441-86b9-9fc355a75142-kube-api-access-gvcpw\") pod \"swift-proxy-85b87f8dc-297hh\" (UID: \"5a662040-fd64-4441-86b9-9fc355a75142\") " pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:55 crc kubenswrapper[4731]: I1125 15:14:55.557492 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.146698 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-85b87f8dc-297hh"] Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.151463 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6cf794cbfb-89vbb" event={"ID":"87214202-a4d1-48c1-af50-b9eed07cab9d","Type":"ContainerDied","Data":"52c5964791e66bc766b78fa921e30809b0f55f910f482e84c9f11e7c76b06f6c"} Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.151509 4731 scope.go:117] "RemoveContainer" containerID="dc1b89295e73ed9b1d568c3af548142380b7b50601c3855a739bf64056d10206" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.151697 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6cf794cbfb-89vbb" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.164011 4731 generic.go:334] "Generic (PLEG): container finished" podID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerID="854eb8894fbbf4fdaf45ae7258400f4b64c8926e7124e8730fd3adaa578df932" exitCode=0 Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.164275 4731 generic.go:334] "Generic (PLEG): container finished" podID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerID="981952b611751d2a4d905c6db56ef9324c0cae6c74ec8685ba3623cdcb9057da" exitCode=2 Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.164285 4731 generic.go:334] "Generic (PLEG): container finished" podID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerID="6a9167a272110e6418746593fdeee614209fc2aa77060e663aa897b38d286125" exitCode=0 Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.164349 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c","Type":"ContainerDied","Data":"854eb8894fbbf4fdaf45ae7258400f4b64c8926e7124e8730fd3adaa578df932"} Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.164373 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c","Type":"ContainerDied","Data":"981952b611751d2a4d905c6db56ef9324c0cae6c74ec8685ba3623cdcb9057da"} Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.164382 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c","Type":"ContainerDied","Data":"6a9167a272110e6418746593fdeee614209fc2aa77060e663aa897b38d286125"} Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.166370 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"23096767-3b8d-4153-8d68-0285b66b2b1b","Type":"ContainerStarted","Data":"c42b18ecf27f3a63ffc66993b0f834cbf1c99b55207a34ae1f53716e5d20b624"} Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.190959 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6cf794cbfb-89vbb"] Nov 25 15:14:56 crc kubenswrapper[4731]: W1125 15:14:56.194251 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a662040_fd64_4441_86b9_9fc355a75142.slice/crio-01ea66ac541411874ee20a8176e82ccaded60431459078cddb8c8260a21e32d0 WatchSource:0}: Error finding container 01ea66ac541411874ee20a8176e82ccaded60431459078cddb8c8260a21e32d0: Status 404 returned error can't find the container with id 01ea66ac541411874ee20a8176e82ccaded60431459078cddb8c8260a21e32d0 Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.210167 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6cf794cbfb-89vbb"] Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.214990 4731 scope.go:117] "RemoveContainer" containerID="85b1ecf5ffe985766fb98c198382bdb0e5d3d5782175a6ffd6c2811214460267" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.217622 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.217595462 podStartE2EDuration="3.217595462s" podCreationTimestamp="2025-11-25 15:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:56.194193992 +0000 UTC m=+1254.941328422" 
watchObservedRunningTime="2025-11-25 15:14:56.217595462 +0000 UTC m=+1254.964729892" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.509934 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5f66fcd449-6b7kw"] Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.511416 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.515167 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.516872 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-4zk5x" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.517947 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.533812 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f66fcd449-6b7kw"] Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.611612 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.611974 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-combined-ca-bundle\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.612138 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwdsj\" (UniqueName: \"kubernetes.io/projected/069c92d4-3268-4960-8aaf-e071a962a219-kube-api-access-kwdsj\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.612361 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data-custom\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.673178 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-nc8dk"] Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.677502 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.714379 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwdsj\" (UniqueName: \"kubernetes.io/projected/069c92d4-3268-4960-8aaf-e071a962a219-kube-api-access-kwdsj\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.714442 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.714495 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.714528 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data-custom\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.714556 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.714593 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.714616 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fttrx\" (UniqueName: \"kubernetes.io/projected/eafdd6d2-3514-4de8-afc5-651d22a486a4-kube-api-access-fttrx\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.714637 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-combined-ca-bundle\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.714684 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.714706 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-config\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.725454 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-nc8dk"] Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.735044 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-combined-ca-bundle\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.735422 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data-custom\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.753872 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwdsj\" (UniqueName: \"kubernetes.io/projected/069c92d4-3268-4960-8aaf-e071a962a219-kube-api-access-kwdsj\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.758118 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data\") pod \"heat-engine-5f66fcd449-6b7kw\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.790183 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-6c9d7769bd-zmnpm"] Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.792906 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.796389 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.811726 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-ffdfb7796-4v5j9"] Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.814936 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.816838 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.816883 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-config\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.816918 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.816950 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-combined-ca-bundle\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.816989 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.817050 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.817108 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.817161 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fttrx\" (UniqueName: \"kubernetes.io/projected/eafdd6d2-3514-4de8-afc5-651d22a486a4-kube-api-access-fttrx\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.817205 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data-custom\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: 
\"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.817243 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8f2t\" (UniqueName: \"kubernetes.io/projected/53a766a8-f8e4-4d2d-8202-e242a2e70d61-kube-api-access-b8f2t\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.817936 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-config\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.818476 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.818628 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.818641 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.819418 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.830992 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.842332 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87214202-a4d1-48c1-af50-b9eed07cab9d" path="/var/lib/kubelet/pods/87214202-a4d1-48c1-af50-b9eed07cab9d/volumes" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.843564 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fttrx\" (UniqueName: \"kubernetes.io/projected/eafdd6d2-3514-4de8-afc5-651d22a486a4-kube-api-access-fttrx\") pod \"dnsmasq-dns-688b9f5b49-nc8dk\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.845742 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c9d7769bd-zmnpm"] Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.864892 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.865551 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-ffdfb7796-4v5j9"] Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.920972 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data-custom\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.921365 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data-custom\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.921449 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8f2t\" (UniqueName: \"kubernetes.io/projected/53a766a8-f8e4-4d2d-8202-e242a2e70d61-kube-api-access-b8f2t\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.921536 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-combined-ca-bundle\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.921606 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.921645 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-combined-ca-bundle\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.921791 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.921908 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8f67\" (UniqueName: \"kubernetes.io/projected/81a541bc-b19f-488c-9d8d-2c48663b4033-kube-api-access-n8f67\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.927784 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.929844 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-combined-ca-bundle\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.936291 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data-custom\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:56 crc kubenswrapper[4731]: I1125 15:14:56.940494 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8f2t\" (UniqueName: \"kubernetes.io/projected/53a766a8-f8e4-4d2d-8202-e242a2e70d61-kube-api-access-b8f2t\") pod \"heat-api-6c9d7769bd-zmnpm\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.023247 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-combined-ca-bundle\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.023334 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.023365 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8f67\" (UniqueName: \"kubernetes.io/projected/81a541bc-b19f-488c-9d8d-2c48663b4033-kube-api-access-n8f67\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.023392 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data-custom\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.029788 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.030148 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-combined-ca-bundle\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.031164 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data-custom\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.042500 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8f67\" (UniqueName: \"kubernetes.io/projected/81a541bc-b19f-488c-9d8d-2c48663b4033-kube-api-access-n8f67\") pod \"heat-cfnapi-ffdfb7796-4v5j9\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.104869 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.131630 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.168456 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.190867 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b87f8dc-297hh" event={"ID":"5a662040-fd64-4441-86b9-9fc355a75142","Type":"ContainerStarted","Data":"041026b26bd418e7b253fa77c8c2e850e52faa2a1b026aeb560b20aeca66339f"} Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.190911 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b87f8dc-297hh" event={"ID":"5a662040-fd64-4441-86b9-9fc355a75142","Type":"ContainerStarted","Data":"c15ac650cd14f1aac30c29128d64506d93d7d3e3872774f9b1d6f28158fb6916"} Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.190925 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-85b87f8dc-297hh" event={"ID":"5a662040-fd64-4441-86b9-9fc355a75142","Type":"ContainerStarted","Data":"01ea66ac541411874ee20a8176e82ccaded60431459078cddb8c8260a21e32d0"} Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.191024 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.191049 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:14:57 crc kubenswrapper[4731]: I1125 15:14:57.221772 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-85b87f8dc-297hh" podStartSLOduration=2.22175105 podStartE2EDuration="2.22175105s" podCreationTimestamp="2025-11-25 15:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:14:57.217257196 +0000 UTC m=+1255.964391626" watchObservedRunningTime="2025-11-25 15:14:57.22175105 +0000 UTC m=+1255.968885480" Nov 25 15:14:58 crc kubenswrapper[4731]: I1125 15:14:58.385779 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Nov 25 15:14:58 crc kubenswrapper[4731]: I1125 15:14:58.387104 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4b46ae25-5a68-4181-a69e-c1620fd2c060" containerName="glance-log" containerID="cri-o://a71b6d53b34fdc67921eda79cab827a4e9d70caae43bb9281ab73bd2a115d7cc" gracePeriod=30 Nov 25 15:14:58 crc kubenswrapper[4731]: I1125 15:14:58.387179 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4b46ae25-5a68-4181-a69e-c1620fd2c060" containerName="glance-httpd" containerID="cri-o://8452b379fa5bb1295fb1ef4d48747516b1ec0bbab83e1bf933c0805f068148cb" gracePeriod=30 Nov 25 15:14:58 crc kubenswrapper[4731]: I1125 15:14:58.454628 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Nov 25 15:14:59 crc kubenswrapper[4731]: I1125 15:14:59.223896 4731 generic.go:334] "Generic (PLEG): container finished" podID="4b46ae25-5a68-4181-a69e-c1620fd2c060" containerID="a71b6d53b34fdc67921eda79cab827a4e9d70caae43bb9281ab73bd2a115d7cc" exitCode=143 Nov 25 15:14:59 crc kubenswrapper[4731]: I1125 15:14:59.223992 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b46ae25-5a68-4181-a69e-c1620fd2c060","Type":"ContainerDied","Data":"a71b6d53b34fdc67921eda79cab827a4e9d70caae43bb9281ab73bd2a115d7cc"} Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.136972 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn"] Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.138487 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.140571 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.140571 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.148099 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn"] Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.239092 4731 generic.go:334] "Generic (PLEG): container finished" podID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerID="7225a168f9acddc98980b21367a098dd8279b971d46631aaa577fe1b52d22c59" exitCode=0 Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.239151 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c","Type":"ContainerDied","Data":"7225a168f9acddc98980b21367a098dd8279b971d46631aaa577fe1b52d22c59"} Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.299897 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-config-volume\") pod \"collect-profiles-29401395-lwqjn\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.299976 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqfvn\" (UniqueName: \"kubernetes.io/projected/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-kube-api-access-wqfvn\") pod \"collect-profiles-29401395-lwqjn\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.300011 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-secret-volume\") pod \"collect-profiles-29401395-lwqjn\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.401280 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-config-volume\") pod \"collect-profiles-29401395-lwqjn\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.401791 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqfvn\" (UniqueName: \"kubernetes.io/projected/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-kube-api-access-wqfvn\") pod \"collect-profiles-29401395-lwqjn\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.401896 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-secret-volume\") pod \"collect-profiles-29401395-lwqjn\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.405762 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-config-volume\") pod \"collect-profiles-29401395-lwqjn\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.410657 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-secret-volume\") pod \"collect-profiles-29401395-lwqjn\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.420783 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqfvn\" (UniqueName: \"kubernetes.io/projected/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-kube-api-access-wqfvn\") pod \"collect-profiles-29401395-lwqjn\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:00 crc kubenswrapper[4731]: I1125 15:15:00.471929 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:02 crc kubenswrapper[4731]: I1125 15:15:02.273105 4731 generic.go:334] "Generic (PLEG): container finished" podID="4b46ae25-5a68-4181-a69e-c1620fd2c060" containerID="8452b379fa5bb1295fb1ef4d48747516b1ec0bbab83e1bf933c0805f068148cb" exitCode=0 Nov 25 15:15:02 crc kubenswrapper[4731]: I1125 15:15:02.273163 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b46ae25-5a68-4181-a69e-c1620fd2c060","Type":"ContainerDied","Data":"8452b379fa5bb1295fb1ef4d48747516b1ec0bbab83e1bf933c0805f068148cb"} Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.052334 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-574677c86-dzlj9"] Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.054077 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.075120 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-69fd4bd965-vmt4d"] Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.076295 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.085395 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-574677c86-dzlj9"] Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.095202 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69fd4bd965-vmt4d"] Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.136422 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-549496ffb7-8jt8x"] Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.137524 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.153209 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21056706-0226-48f6-bbe2-5a8c5fb49c7d-combined-ca-bundle\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.153355 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtkc\" (UniqueName: \"kubernetes.io/projected/21056706-0226-48f6-bbe2-5a8c5fb49c7d-kube-api-access-kgtkc\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.153416 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21056706-0226-48f6-bbe2-5a8c5fb49c7d-config-data-custom\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.153437 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21056706-0226-48f6-bbe2-5a8c5fb49c7d-config-data\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.158874 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-549496ffb7-8jt8x"] Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264182 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264252 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgtkc\" (UniqueName: \"kubernetes.io/projected/21056706-0226-48f6-bbe2-5a8c5fb49c7d-kube-api-access-kgtkc\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264273 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data-custom\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264299 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v64fc\" (UniqueName: \"kubernetes.io/projected/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-kube-api-access-v64fc\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264317 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-combined-ca-bundle\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264343 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data-custom\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264369 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-combined-ca-bundle\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264386 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21056706-0226-48f6-bbe2-5a8c5fb49c7d-config-data-custom\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264401 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms6n2\" (UniqueName: \"kubernetes.io/projected/94fb83b0-f12c-45ee-bae8-058c7b59d28f-kube-api-access-ms6n2\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264419 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21056706-0226-48f6-bbe2-5a8c5fb49c7d-config-data\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264448 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21056706-0226-48f6-bbe2-5a8c5fb49c7d-combined-ca-bundle\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.264492 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.270764 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/21056706-0226-48f6-bbe2-5a8c5fb49c7d-config-data-custom\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.271847 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21056706-0226-48f6-bbe2-5a8c5fb49c7d-config-data\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.273324 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21056706-0226-48f6-bbe2-5a8c5fb49c7d-combined-ca-bundle\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.304437 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgtkc\" (UniqueName: \"kubernetes.io/projected/21056706-0226-48f6-bbe2-5a8c5fb49c7d-kube-api-access-kgtkc\") pod \"heat-engine-574677c86-dzlj9\" (UID: \"21056706-0226-48f6-bbe2-5a8c5fb49c7d\") " pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.365963 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.366076 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.366115 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data-custom\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.366136 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v64fc\" (UniqueName: \"kubernetes.io/projected/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-kube-api-access-v64fc\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.366156 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-combined-ca-bundle\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.366181 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data-custom\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.366206 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-combined-ca-bundle\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.366223 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms6n2\" (UniqueName: \"kubernetes.io/projected/94fb83b0-f12c-45ee-bae8-058c7b59d28f-kube-api-access-ms6n2\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.373596 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data-custom\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.374301 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data-custom\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.375136 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.375689 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-combined-ca-bundle\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.376516 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.378404 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-combined-ca-bundle\") pod 
\"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.384353 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms6n2\" (UniqueName: \"kubernetes.io/projected/94fb83b0-f12c-45ee-bae8-058c7b59d28f-kube-api-access-ms6n2\") pod \"heat-cfnapi-549496ffb7-8jt8x\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.384886 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v64fc\" (UniqueName: \"kubernetes.io/projected/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-kube-api-access-v64fc\") pod \"heat-api-69fd4bd965-vmt4d\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.403254 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.414363 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.456507 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:03 crc kubenswrapper[4731]: I1125 15:15:03.704745 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.296343 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.313353 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c","Type":"ContainerDied","Data":"d11fff7a6c10849745d713ced2b910ab0e9aea70bc4f5507d71e01d8018b6214"} Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.313404 4731 scope.go:117] "RemoveContainer" containerID="854eb8894fbbf4fdaf45ae7258400f4b64c8926e7124e8730fd3adaa578df932" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.313570 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.413573 4731 scope.go:117] "RemoveContainer" containerID="981952b611751d2a4d905c6db56ef9324c0cae6c74ec8685ba3623cdcb9057da" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.492187 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hx67\" (UniqueName: \"kubernetes.io/projected/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-kube-api-access-4hx67\") pod \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.492246 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-log-httpd\") pod \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.492613 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-sg-core-conf-yaml\") pod \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.492850 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-combined-ca-bundle\") pod \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.492885 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-scripts\") pod \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.492937 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-config-data\") pod \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.492960 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-run-httpd\") pod \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\" (UID: \"5d30f6bd-f2f7-4edb-a26a-a63d92e2414c\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.493385 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" (UID: "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.493998 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" (UID: "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.494168 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.494186 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.503507 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-scripts" (OuterVolumeSpecName: "scripts") pod "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" (UID: "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.503538 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-kube-api-access-4hx67" (OuterVolumeSpecName: "kube-api-access-4hx67") pod "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" (UID: "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c"). InnerVolumeSpecName "kube-api-access-4hx67". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.534367 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" (UID: "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.585017 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" (UID: "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.596783 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.596868 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.596899 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hx67\" (UniqueName: \"kubernetes.io/projected/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-kube-api-access-4hx67\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.596928 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.619354 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-config-data" (OuterVolumeSpecName: "config-data") pod "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" (UID: "5d30f6bd-f2f7-4edb-a26a-a63d92e2414c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.692488 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.698136 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.708520 4731 scope.go:117] "RemoveContainer" containerID="7225a168f9acddc98980b21367a098dd8279b971d46631aaa577fe1b52d22c59" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.714267 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.714519 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerName="glance-log" containerID="cri-o://4b82b717e6e11a23fd4ed5ede1cf6036a1585f3e11660179b59a0e77aceb8aac" gracePeriod=30 Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.714580 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerName="glance-httpd" containerID="cri-o://f50fff902b1f5357488d2fee4492fb5988c0da38a001ab51bf7da633a84af0d5" gracePeriod=30 Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.748755 4731 scope.go:117] "RemoveContainer" containerID="6a9167a272110e6418746593fdeee614209fc2aa77060e663aa897b38d286125" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.799586 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27ztg\" (UniqueName: \"kubernetes.io/projected/4b46ae25-5a68-4181-a69e-c1620fd2c060-kube-api-access-27ztg\") pod 
\"4b46ae25-5a68-4181-a69e-c1620fd2c060\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.799652 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-internal-tls-certs\") pod \"4b46ae25-5a68-4181-a69e-c1620fd2c060\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.799725 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-httpd-run\") pod \"4b46ae25-5a68-4181-a69e-c1620fd2c060\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.799742 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"4b46ae25-5a68-4181-a69e-c1620fd2c060\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.799786 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-scripts\") pod \"4b46ae25-5a68-4181-a69e-c1620fd2c060\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.799880 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-config-data\") pod \"4b46ae25-5a68-4181-a69e-c1620fd2c060\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.799921 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-logs\") pod \"4b46ae25-5a68-4181-a69e-c1620fd2c060\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.799948 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-combined-ca-bundle\") pod \"4b46ae25-5a68-4181-a69e-c1620fd2c060\" (UID: \"4b46ae25-5a68-4181-a69e-c1620fd2c060\") " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.803800 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-logs" (OuterVolumeSpecName: "logs") pod "4b46ae25-5a68-4181-a69e-c1620fd2c060" (UID: "4b46ae25-5a68-4181-a69e-c1620fd2c060"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.804040 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4b46ae25-5a68-4181-a69e-c1620fd2c060" (UID: "4b46ae25-5a68-4181-a69e-c1620fd2c060"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.805232 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "4b46ae25-5a68-4181-a69e-c1620fd2c060" (UID: "4b46ae25-5a68-4181-a69e-c1620fd2c060"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.813103 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b46ae25-5a68-4181-a69e-c1620fd2c060-kube-api-access-27ztg" (OuterVolumeSpecName: "kube-api-access-27ztg") pod "4b46ae25-5a68-4181-a69e-c1620fd2c060" (UID: "4b46ae25-5a68-4181-a69e-c1620fd2c060"). InnerVolumeSpecName "kube-api-access-27ztg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.835644 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-scripts" (OuterVolumeSpecName: "scripts") pod "4b46ae25-5a68-4181-a69e-c1620fd2c060" (UID: "4b46ae25-5a68-4181-a69e-c1620fd2c060"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.871259 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b46ae25-5a68-4181-a69e-c1620fd2c060" (UID: "4b46ae25-5a68-4181-a69e-c1620fd2c060"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.898567 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-69fd4bd965-vmt4d"] Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.901035 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4b46ae25-5a68-4181-a69e-c1620fd2c060" (UID: "4b46ae25-5a68-4181-a69e-c1620fd2c060"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.904938 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27ztg\" (UniqueName: \"kubernetes.io/projected/4b46ae25-5a68-4181-a69e-c1620fd2c060-kube-api-access-27ztg\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.905001 4731 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.905014 4731 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.905078 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.905093 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.905107 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4b46ae25-5a68-4181-a69e-c1620fd2c060-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.905116 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.930190 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-nc8dk"] Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.941508 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5f66fcd449-6b7kw"] Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.954162 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-549496ffb7-8jt8x"] Nov 25 15:15:04 crc kubenswrapper[4731]: W1125 15:15:04.961260 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21056706_0226_48f6_bbe2_5a8c5fb49c7d.slice/crio-21fb5d0e246edb022f528d86cb571f6988ee619ed850d33e936d43ffbba3a8d5 WatchSource:0}: Error finding container 21fb5d0e246edb022f528d86cb571f6988ee619ed850d33e936d43ffbba3a8d5: Status 404 returned error can't find the container with id 21fb5d0e246edb022f528d86cb571f6988ee619ed850d33e936d43ffbba3a8d5 Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.971257 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-574677c86-dzlj9"] Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.973648 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Nov 25 15:15:04 crc kubenswrapper[4731]: I1125 15:15:04.981462 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.000169 4731 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.002016 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-config-data" (OuterVolumeSpecName: "config-data") pod "4b46ae25-5a68-4181-a69e-c1620fd2c060" (UID: "4b46ae25-5a68-4181-a69e-c1620fd2c060"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.007659 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.007686 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b46ae25-5a68-4181-a69e-c1620fd2c060-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.047413 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:05 crc kubenswrapper[4731]: E1125 15:15:05.047956 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="proxy-httpd" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.047971 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="proxy-httpd" Nov 25 15:15:05 crc kubenswrapper[4731]: E1125 15:15:05.047990 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="ceilometer-central-agent" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.047999 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="ceilometer-central-agent" Nov 25 15:15:05 crc kubenswrapper[4731]: E1125 15:15:05.048019 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b46ae25-5a68-4181-a69e-c1620fd2c060" containerName="glance-httpd" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.048025 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b46ae25-5a68-4181-a69e-c1620fd2c060" containerName="glance-httpd" Nov 25 15:15:05 crc kubenswrapper[4731]: E1125 15:15:05.048042 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b46ae25-5a68-4181-a69e-c1620fd2c060" containerName="glance-log" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.048048 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b46ae25-5a68-4181-a69e-c1620fd2c060" containerName="glance-log" Nov 25 15:15:05 crc kubenswrapper[4731]: E1125 15:15:05.048060 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="sg-core" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.048065 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="sg-core" Nov 25 15:15:05 crc kubenswrapper[4731]: E1125 15:15:05.048073 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="ceilometer-notification-agent" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.048082 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="ceilometer-notification-agent" Nov 25 15:15:05 crc 
kubenswrapper[4731]: I1125 15:15:05.048263 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="ceilometer-central-agent" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.048276 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="ceilometer-notification-agent" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.048287 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b46ae25-5a68-4181-a69e-c1620fd2c060" containerName="glance-log" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.048294 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="sg-core" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.048305 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="proxy-httpd" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.048313 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b46ae25-5a68-4181-a69e-c1620fd2c060" containerName="glance-httpd" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.051168 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.052612 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.056315 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.062907 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.111391 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-6c9d7769bd-zmnpm"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.142290 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-ffdfb7796-4v5j9"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.218021 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-log-httpd\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.218074 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.218121 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-run-httpd\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.218165 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kczz9\" (UniqueName: 
\"kubernetes.io/projected/79f1ac0d-5b42-4d61-b552-9f6675fee139-kube-api-access-kczz9\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.218195 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-scripts\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.218216 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.218290 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-config-data\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.320464 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.320528 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-run-httpd\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.320570 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kczz9\" (UniqueName: \"kubernetes.io/projected/79f1ac0d-5b42-4d61-b552-9f6675fee139-kube-api-access-kczz9\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.320600 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-scripts\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.320620 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.320654 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-config-data\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.320732 4731 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-log-httpd\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.321200 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-log-httpd\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.324594 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-run-httpd\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.327640 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-scripts\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.329658 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.331505 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.340168 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.343168 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-config-data\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: W1125 15:15:05.344266 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0a7e2d6_c2ce_40f4_9b52_66ecdd265caa.slice/crio-fdc9a169f1b43ce7319836abbad5871ac8a0539cd8af8fc37ed9a3495dcb0cff WatchSource:0}: Error finding container fdc9a169f1b43ce7319836abbad5871ac8a0539cd8af8fc37ed9a3495dcb0cff: Status 404 returned error can't find the container with id fdc9a169f1b43ce7319836abbad5871ac8a0539cd8af8fc37ed9a3495dcb0cff Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.347568 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kczz9\" (UniqueName: \"kubernetes.io/projected/79f1ac0d-5b42-4d61-b552-9f6675fee139-kube-api-access-kczz9\") pod \"ceilometer-0\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.364956 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"8815eebf-c8f8-49f9-94fb-5c3c454514d5","Type":"ContainerStarted","Data":"73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0"} Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.369459 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" event={"ID":"94fb83b0-f12c-45ee-bae8-058c7b59d28f","Type":"ContainerStarted","Data":"5964d8a0ce2cd130598866ace0688d3f4462ec9acd3b266109a08bda1c5e4581"} Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.371459 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.385878 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f66fcd449-6b7kw" event={"ID":"069c92d4-3268-4960-8aaf-e071a962a219","Type":"ContainerStarted","Data":"ddeabf12920c6acd18450d9af92cefd455d227019f5b2a0d4650a4651794a77c"} Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.389912 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69fd4bd965-vmt4d" event={"ID":"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8","Type":"ContainerStarted","Data":"d2c66e2d1b611c46a7ee53cce6a5561a9f3ebaa17c6f5db7c91633c6edab86b8"} Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.394397 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.27351756 podStartE2EDuration="15.39436841s" podCreationTimestamp="2025-11-25 15:14:50 +0000 UTC" firstStartedPulling="2025-11-25 15:14:50.835157199 +0000 UTC m=+1249.582291629" lastFinishedPulling="2025-11-25 15:15:03.956008049 +0000 UTC m=+1262.703142479" observedRunningTime="2025-11-25 15:15:05.389789724 +0000 UTC m=+1264.136924164" watchObservedRunningTime="2025-11-25 15:15:05.39436841 +0000 UTC m=+1264.141502840" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.395073 4731 generic.go:334] "Generic (PLEG): container finished" podID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerID="4b82b717e6e11a23fd4ed5ede1cf6036a1585f3e11660179b59a0e77aceb8aac" exitCode=143 Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.395130 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51de4e1c-41f4-42ae-be32-ba6ad0c215f7","Type":"ContainerDied","Data":"4b82b717e6e11a23fd4ed5ede1cf6036a1585f3e11660179b59a0e77aceb8aac"} Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.408392 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" event={"ID":"eafdd6d2-3514-4de8-afc5-651d22a486a4","Type":"ContainerStarted","Data":"59dd08c1733195992d4292c6fccfba09f5b8892caf399df553899cb6b0071797"} Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.413810 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" event={"ID":"81a541bc-b19f-488c-9d8d-2c48663b4033","Type":"ContainerStarted","Data":"fe70a99642978f0cd7b9595ba2ac1fe48b1daeff76df35527f479a904808bd0c"} Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.416383 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4b46ae25-5a68-4181-a69e-c1620fd2c060","Type":"ContainerDied","Data":"cfb8146855b3354e980ac05b095b1d05b065e7148a6c8e88b5a85b9f7f8378e6"} Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.416413 4731 scope.go:117] "RemoveContainer" 
containerID="8452b379fa5bb1295fb1ef4d48747516b1ec0bbab83e1bf933c0805f068148cb" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.416409 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.423744 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c9d7769bd-zmnpm" event={"ID":"53a766a8-f8e4-4d2d-8202-e242a2e70d61","Type":"ContainerStarted","Data":"1629872ed3402d7ec3d68832a811effb6387e3e20c73996b5bf005075e0da0c9"} Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.435602 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-574677c86-dzlj9" event={"ID":"21056706-0226-48f6-bbe2-5a8c5fb49c7d","Type":"ContainerStarted","Data":"21fb5d0e246edb022f528d86cb571f6988ee619ed850d33e936d43ffbba3a8d5"} Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.521536 4731 scope.go:117] "RemoveContainer" containerID="a71b6d53b34fdc67921eda79cab827a4e9d70caae43bb9281ab73bd2a115d7cc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.550952 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.589089 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.590617 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-85b87f8dc-297hh" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.609895 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.615048 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c9d7769bd-zmnpm"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.629579 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-ffdfb7796-4v5j9"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.636746 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-cdf8597bf-t8vdc"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.638129 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.646838 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.646919 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.647077 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.648622 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.656695 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.656985 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.657090 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cdf8597bf-t8vdc"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.665335 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-8cf475f5-wpg2b"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.666456 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.677352 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.677539 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.678600 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.686753 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8cf475f5-wpg2b"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.836919 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqw77\" (UniqueName: \"kubernetes.io/projected/b713395b-6feb-411a-b13b-d35890a8b2d3-kube-api-access-fqw77\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837259 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-internal-tls-certs\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837280 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-public-tls-certs\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837305 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837338 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837356 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-combined-ca-bundle\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837380 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b713395b-6feb-411a-b13b-d35890a8b2d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837402 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-config-data-custom\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837456 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837474 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b713395b-6feb-411a-b13b-d35890a8b2d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837492 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-combined-ca-bundle\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837512 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837529 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttczt\" (UniqueName: \"kubernetes.io/projected/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-kube-api-access-ttczt\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837552 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-config-data-custom\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837580 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837622 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-config-data\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837638 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-public-tls-certs\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837655 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-internal-tls-certs\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837682 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-config-data\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.837700 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6x49\" (UniqueName: \"kubernetes.io/projected/c7de698f-a9bd-4231-9cee-dd0c1c93e461-kube-api-access-k6x49\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.940840 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.940916 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-config-data\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.940939 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-public-tls-certs\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.940965 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-internal-tls-certs\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.940998 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-config-data\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941014 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6x49\" (UniqueName: \"kubernetes.io/projected/c7de698f-a9bd-4231-9cee-dd0c1c93e461-kube-api-access-k6x49\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941069 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqw77\" (UniqueName: \"kubernetes.io/projected/b713395b-6feb-411a-b13b-d35890a8b2d3-kube-api-access-fqw77\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941097 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-internal-tls-certs\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941112 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-public-tls-certs\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941132 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941173 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941190 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-combined-ca-bundle\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941206 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b713395b-6feb-411a-b13b-d35890a8b2d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941224 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-config-data-custom\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941289 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941303 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b713395b-6feb-411a-b13b-d35890a8b2d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941322 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-combined-ca-bundle\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941338 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941363 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttczt\" (UniqueName: \"kubernetes.io/projected/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-kube-api-access-ttczt\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.941382 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-config-data-custom\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.942103 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-9wqvh"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.942141 4731 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.943561 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9wqvh" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.944090 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b713395b-6feb-411a-b13b-d35890a8b2d3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.946018 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b713395b-6feb-411a-b13b-d35890a8b2d3-logs\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.953121 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-combined-ca-bundle\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.956576 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-config-data-custom\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.957689 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-internal-tls-certs\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.958406 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-config-data\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.958468 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-config-data\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.961009 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.961157 
4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-public-tls-certs\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.961422 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.962069 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9wqvh"] Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.962187 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.973786 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttczt\" (UniqueName: \"kubernetes.io/projected/f49ff135-870b-4b14-8ec3-a20bec8b9ba6-kube-api-access-ttczt\") pod \"heat-cfnapi-8cf475f5-wpg2b\" (UID: \"f49ff135-870b-4b14-8ec3-a20bec8b9ba6\") " pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.975474 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-config-data-custom\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.982999 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqw77\" (UniqueName: \"kubernetes.io/projected/b713395b-6feb-411a-b13b-d35890a8b2d3-kube-api-access-fqw77\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:05 crc kubenswrapper[4731]: I1125 15:15:05.970458 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b713395b-6feb-411a-b13b-d35890a8b2d3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.020188 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"b713395b-6feb-411a-b13b-d35890a8b2d3\") " pod="openstack/glance-default-internal-api-0" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.022746 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.029972 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-combined-ca-bundle\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.030155 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-public-tls-certs\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.032664 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7de698f-a9bd-4231-9cee-dd0c1c93e461-internal-tls-certs\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.040492 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6x49\" (UniqueName: \"kubernetes.io/projected/c7de698f-a9bd-4231-9cee-dd0c1c93e461-kube-api-access-k6x49\") pod \"heat-api-cdf8597bf-t8vdc\" (UID: \"c7de698f-a9bd-4231-9cee-dd0c1c93e461\") " pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.041433 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-pmklk"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.042630 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b0994d-ff1b-47bf-8728-965ce2899e08-operator-scripts\") pod \"nova-api-db-create-9wqvh\" (UID: \"01b0994d-ff1b-47bf-8728-965ce2899e08\") " pod="openstack/nova-api-db-create-9wqvh" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.042669 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt4v7\" (UniqueName: \"kubernetes.io/projected/01b0994d-ff1b-47bf-8728-965ce2899e08-kube-api-access-pt4v7\") pod \"nova-api-db-create-9wqvh\" (UID: \"01b0994d-ff1b-47bf-8728-965ce2899e08\") " pod="openstack/nova-api-db-create-9wqvh" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.042876 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pmklk" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.066932 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pmklk"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.137196 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-8101-account-create-v6vwr"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.138847 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8101-account-create-v6vwr" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.141983 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.144465 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4998d67-3897-41fa-8a9d-f7e37c102a61-operator-scripts\") pod \"nova-cell0-db-create-pmklk\" (UID: \"d4998d67-3897-41fa-8a9d-f7e37c102a61\") " pod="openstack/nova-cell0-db-create-pmklk" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.144562 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b0994d-ff1b-47bf-8728-965ce2899e08-operator-scripts\") pod \"nova-api-db-create-9wqvh\" (UID: \"01b0994d-ff1b-47bf-8728-965ce2899e08\") " pod="openstack/nova-api-db-create-9wqvh" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.144585 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt4v7\" (UniqueName: \"kubernetes.io/projected/01b0994d-ff1b-47bf-8728-965ce2899e08-kube-api-access-pt4v7\") pod \"nova-api-db-create-9wqvh\" (UID: \"01b0994d-ff1b-47bf-8728-965ce2899e08\") " pod="openstack/nova-api-db-create-9wqvh" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.144606 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsp8b\" (UniqueName: \"kubernetes.io/projected/d4998d67-3897-41fa-8a9d-f7e37c102a61-kube-api-access-bsp8b\") pod \"nova-cell0-db-create-pmklk\" (UID: \"d4998d67-3897-41fa-8a9d-f7e37c102a61\") " pod="openstack/nova-cell0-db-create-pmklk" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.150056 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b0994d-ff1b-47bf-8728-965ce2899e08-operator-scripts\") pod \"nova-api-db-create-9wqvh\" (UID: \"01b0994d-ff1b-47bf-8728-965ce2899e08\") " pod="openstack/nova-api-db-create-9wqvh" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.167160 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8101-account-create-v6vwr"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.180224 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt4v7\" (UniqueName: \"kubernetes.io/projected/01b0994d-ff1b-47bf-8728-965ce2899e08-kube-api-access-pt4v7\") pod \"nova-api-db-create-9wqvh\" (UID: \"01b0994d-ff1b-47bf-8728-965ce2899e08\") " pod="openstack/nova-api-db-create-9wqvh" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.254600 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33ccf64-af15-436e-ac8b-9c9efc83f08f-operator-scripts\") pod \"nova-api-8101-account-create-v6vwr\" (UID: \"b33ccf64-af15-436e-ac8b-9c9efc83f08f\") " pod="openstack/nova-api-8101-account-create-v6vwr" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.255675 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs9xp\" (UniqueName: \"kubernetes.io/projected/b33ccf64-af15-436e-ac8b-9c9efc83f08f-kube-api-access-vs9xp\") pod \"nova-api-8101-account-create-v6vwr\" (UID: 
\"b33ccf64-af15-436e-ac8b-9c9efc83f08f\") " pod="openstack/nova-api-8101-account-create-v6vwr" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.255797 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4998d67-3897-41fa-8a9d-f7e37c102a61-operator-scripts\") pod \"nova-cell0-db-create-pmklk\" (UID: \"d4998d67-3897-41fa-8a9d-f7e37c102a61\") " pod="openstack/nova-cell0-db-create-pmklk" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.256987 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4998d67-3897-41fa-8a9d-f7e37c102a61-operator-scripts\") pod \"nova-cell0-db-create-pmklk\" (UID: \"d4998d67-3897-41fa-8a9d-f7e37c102a61\") " pod="openstack/nova-cell0-db-create-pmklk" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.267925 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsp8b\" (UniqueName: \"kubernetes.io/projected/d4998d67-3897-41fa-8a9d-f7e37c102a61-kube-api-access-bsp8b\") pod \"nova-cell0-db-create-pmklk\" (UID: \"d4998d67-3897-41fa-8a9d-f7e37c102a61\") " pod="openstack/nova-cell0-db-create-pmklk" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.280961 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsp8b\" (UniqueName: \"kubernetes.io/projected/d4998d67-3897-41fa-8a9d-f7e37c102a61-kube-api-access-bsp8b\") pod \"nova-cell0-db-create-pmklk\" (UID: \"d4998d67-3897-41fa-8a9d-f7e37c102a61\") " pod="openstack/nova-cell0-db-create-pmklk" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.286602 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.313419 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.323435 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9wqvh" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.373833 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33ccf64-af15-436e-ac8b-9c9efc83f08f-operator-scripts\") pod \"nova-api-8101-account-create-v6vwr\" (UID: \"b33ccf64-af15-436e-ac8b-9c9efc83f08f\") " pod="openstack/nova-api-8101-account-create-v6vwr" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.373917 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs9xp\" (UniqueName: \"kubernetes.io/projected/b33ccf64-af15-436e-ac8b-9c9efc83f08f-kube-api-access-vs9xp\") pod \"nova-api-8101-account-create-v6vwr\" (UID: \"b33ccf64-af15-436e-ac8b-9c9efc83f08f\") " pod="openstack/nova-api-8101-account-create-v6vwr" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.375917 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33ccf64-af15-436e-ac8b-9c9efc83f08f-operator-scripts\") pod \"nova-api-8101-account-create-v6vwr\" (UID: \"b33ccf64-af15-436e-ac8b-9c9efc83f08f\") " pod="openstack/nova-api-8101-account-create-v6vwr" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.407516 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.414359 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs9xp\" (UniqueName: \"kubernetes.io/projected/b33ccf64-af15-436e-ac8b-9c9efc83f08f-kube-api-access-vs9xp\") pod \"nova-api-8101-account-create-v6vwr\" (UID: \"b33ccf64-af15-436e-ac8b-9c9efc83f08f\") " pod="openstack/nova-api-8101-account-create-v6vwr" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.418466 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pmklk" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.435689 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vnjcq"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.437456 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vnjcq" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.446277 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vnjcq"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.460928 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f66fcd449-6b7kw" event={"ID":"069c92d4-3268-4960-8aaf-e071a962a219","Type":"ContainerStarted","Data":"f9eaa4336f93531e33070a847739530af2f2f0bb6ff673bc0e7843ca301bb281"} Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.461518 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.473068 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3d9a-account-create-584gn"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.477725 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg4zt\" (UniqueName: \"kubernetes.io/projected/b775affe-04bb-418b-aae9-9a83db82995a-kube-api-access-dg4zt\") pod \"nova-cell1-db-create-vnjcq\" (UID: \"b775affe-04bb-418b-aae9-9a83db82995a\") " pod="openstack/nova-cell1-db-create-vnjcq" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.477897 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b775affe-04bb-418b-aae9-9a83db82995a-operator-scripts\") pod \"nova-cell1-db-create-vnjcq\" (UID: \"b775affe-04bb-418b-aae9-9a83db82995a\") " pod="openstack/nova-cell1-db-create-vnjcq" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.480291 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79f1ac0d-5b42-4d61-b552-9f6675fee139","Type":"ContainerStarted","Data":"7e6a2ba0364cab0925ecb238b108cf2914c8cd877948ca9d630be3d0b2cdf1d8"} Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.480404 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3d9a-account-create-584gn" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.480862 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-574677c86-dzlj9" event={"ID":"21056706-0226-48f6-bbe2-5a8c5fb49c7d","Type":"ContainerStarted","Data":"03faceb65b2a3a5630dfbdb2270f863cf5a2694baf59686d7f4fedb31a60a56b"} Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.481601 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8101-account-create-v6vwr" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.485947 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.486400 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.489774 4731 generic.go:334] "Generic (PLEG): container finished" podID="eafdd6d2-3514-4de8-afc5-651d22a486a4" containerID="668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0" exitCode=0 Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.489863 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" event={"ID":"eafdd6d2-3514-4de8-afc5-651d22a486a4","Type":"ContainerDied","Data":"668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0"} Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.492906 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3d9a-account-create-584gn"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.531356 4731 generic.go:334] "Generic (PLEG): container finished" podID="e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa" containerID="0298863dd0030f2e6811edb297c770945b27c5d33571dc8608f54cd294a0134f" exitCode=0 Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.532256 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" event={"ID":"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa","Type":"ContainerDied","Data":"0298863dd0030f2e6811edb297c770945b27c5d33571dc8608f54cd294a0134f"} Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.532300 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" event={"ID":"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa","Type":"ContainerStarted","Data":"fdc9a169f1b43ce7319836abbad5871ac8a0539cd8af8fc37ed9a3495dcb0cff"} Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.540561 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5f66fcd449-6b7kw" podStartSLOduration=10.54054041 podStartE2EDuration="10.54054041s" podCreationTimestamp="2025-11-25 15:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:06.485617965 +0000 UTC m=+1265.232752395" watchObservedRunningTime="2025-11-25 15:15:06.54054041 +0000 UTC m=+1265.287674840" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.578846 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg4zt\" (UniqueName: \"kubernetes.io/projected/b775affe-04bb-418b-aae9-9a83db82995a-kube-api-access-dg4zt\") pod \"nova-cell1-db-create-vnjcq\" (UID: \"b775affe-04bb-418b-aae9-9a83db82995a\") " pod="openstack/nova-cell1-db-create-vnjcq" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.578936 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8km2f\" (UniqueName: \"kubernetes.io/projected/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-kube-api-access-8km2f\") pod \"nova-cell0-3d9a-account-create-584gn\" (UID: \"4c2c1f53-61bb-4595-a5c9-17be8b0868a0\") " pod="openstack/nova-cell0-3d9a-account-create-584gn" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 
15:15:06.579090 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-operator-scripts\") pod \"nova-cell0-3d9a-account-create-584gn\" (UID: \"4c2c1f53-61bb-4595-a5c9-17be8b0868a0\") " pod="openstack/nova-cell0-3d9a-account-create-584gn" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.579136 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b775affe-04bb-418b-aae9-9a83db82995a-operator-scripts\") pod \"nova-cell1-db-create-vnjcq\" (UID: \"b775affe-04bb-418b-aae9-9a83db82995a\") " pod="openstack/nova-cell1-db-create-vnjcq" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.582277 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b775affe-04bb-418b-aae9-9a83db82995a-operator-scripts\") pod \"nova-cell1-db-create-vnjcq\" (UID: \"b775affe-04bb-418b-aae9-9a83db82995a\") " pod="openstack/nova-cell1-db-create-vnjcq" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.615709 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg4zt\" (UniqueName: \"kubernetes.io/projected/b775affe-04bb-418b-aae9-9a83db82995a-kube-api-access-dg4zt\") pod \"nova-cell1-db-create-vnjcq\" (UID: \"b775affe-04bb-418b-aae9-9a83db82995a\") " pod="openstack/nova-cell1-db-create-vnjcq" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.638867 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-50d5-account-create-2svk8"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.642561 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-50d5-account-create-2svk8" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.653432 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.657907 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-50d5-account-create-2svk8"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.668737 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-574677c86-dzlj9" podStartSLOduration=3.668716783 podStartE2EDuration="3.668716783s" podCreationTimestamp="2025-11-25 15:15:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:06.548377098 +0000 UTC m=+1265.295511528" watchObservedRunningTime="2025-11-25 15:15:06.668716783 +0000 UTC m=+1265.415851233" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.681658 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/797ff534-6f19-4979-afc8-dfd592e24603-operator-scripts\") pod \"nova-cell1-50d5-account-create-2svk8\" (UID: \"797ff534-6f19-4979-afc8-dfd592e24603\") " pod="openstack/nova-cell1-50d5-account-create-2svk8" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.682538 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8km2f\" (UniqueName: \"kubernetes.io/projected/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-kube-api-access-8km2f\") pod \"nova-cell0-3d9a-account-create-584gn\" (UID: \"4c2c1f53-61bb-4595-a5c9-17be8b0868a0\") " pod="openstack/nova-cell0-3d9a-account-create-584gn" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.682741 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-728sm\" (UniqueName: \"kubernetes.io/projected/797ff534-6f19-4979-afc8-dfd592e24603-kube-api-access-728sm\") pod \"nova-cell1-50d5-account-create-2svk8\" (UID: \"797ff534-6f19-4979-afc8-dfd592e24603\") " pod="openstack/nova-cell1-50d5-account-create-2svk8" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.682960 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-operator-scripts\") pod \"nova-cell0-3d9a-account-create-584gn\" (UID: \"4c2c1f53-61bb-4595-a5c9-17be8b0868a0\") " pod="openstack/nova-cell0-3d9a-account-create-584gn" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.686408 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-operator-scripts\") pod \"nova-cell0-3d9a-account-create-584gn\" (UID: \"4c2c1f53-61bb-4595-a5c9-17be8b0868a0\") " pod="openstack/nova-cell0-3d9a-account-create-584gn" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.703962 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8km2f\" (UniqueName: \"kubernetes.io/projected/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-kube-api-access-8km2f\") pod \"nova-cell0-3d9a-account-create-584gn\" (UID: \"4c2c1f53-61bb-4595-a5c9-17be8b0868a0\") " pod="openstack/nova-cell0-3d9a-account-create-584gn" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.785437 
4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-728sm\" (UniqueName: \"kubernetes.io/projected/797ff534-6f19-4979-afc8-dfd592e24603-kube-api-access-728sm\") pod \"nova-cell1-50d5-account-create-2svk8\" (UID: \"797ff534-6f19-4979-afc8-dfd592e24603\") " pod="openstack/nova-cell1-50d5-account-create-2svk8" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.785598 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/797ff534-6f19-4979-afc8-dfd592e24603-operator-scripts\") pod \"nova-cell1-50d5-account-create-2svk8\" (UID: \"797ff534-6f19-4979-afc8-dfd592e24603\") " pod="openstack/nova-cell1-50d5-account-create-2svk8" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.786953 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/797ff534-6f19-4979-afc8-dfd592e24603-operator-scripts\") pod \"nova-cell1-50d5-account-create-2svk8\" (UID: \"797ff534-6f19-4979-afc8-dfd592e24603\") " pod="openstack/nova-cell1-50d5-account-create-2svk8" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.787415 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-8cf475f5-wpg2b"] Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.819734 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-728sm\" (UniqueName: \"kubernetes.io/projected/797ff534-6f19-4979-afc8-dfd592e24603-kube-api-access-728sm\") pod \"nova-cell1-50d5-account-create-2svk8\" (UID: \"797ff534-6f19-4979-afc8-dfd592e24603\") " pod="openstack/nova-cell1-50d5-account-create-2svk8" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.846975 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b46ae25-5a68-4181-a69e-c1620fd2c060" path="/var/lib/kubelet/pods/4b46ae25-5a68-4181-a69e-c1620fd2c060/volumes" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.847959 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" path="/var/lib/kubelet/pods/5d30f6bd-f2f7-4edb-a26a-a63d92e2414c/volumes" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.892457 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vnjcq" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.909588 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3d9a-account-create-584gn" Nov 25 15:15:06 crc kubenswrapper[4731]: I1125 15:15:06.994260 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-50d5-account-create-2svk8" Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.069507 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-cdf8597bf-t8vdc"] Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.175634 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-9wqvh"] Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.192811 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Nov 25 15:15:07 crc kubenswrapper[4731]: W1125 15:15:07.213200 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb713395b_6feb_411a_b13b_d35890a8b2d3.slice/crio-f215b0e09bf5b6ab18040d6571d80e7bf7618681c8c474965323f20bb04afdfb WatchSource:0}: Error finding container f215b0e09bf5b6ab18040d6571d80e7bf7618681c8c474965323f20bb04afdfb: Status 404 returned error can't find the container with id f215b0e09bf5b6ab18040d6571d80e7bf7618681c8c474965323f20bb04afdfb Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.239980 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-8101-account-create-v6vwr"] Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.251516 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-pmklk"] Nov 25 15:15:07 crc kubenswrapper[4731]: W1125 15:15:07.265276 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb33ccf64_af15_436e_ac8b_9c9efc83f08f.slice/crio-d23193e2cef36173b4050b6b49d2b5d05ef952421b2cb5f2d8f94b92881c0643 WatchSource:0}: Error finding container d23193e2cef36173b4050b6b49d2b5d05ef952421b2cb5f2d8f94b92881c0643: Status 404 returned error can't find the container with id d23193e2cef36173b4050b6b49d2b5d05ef952421b2cb5f2d8f94b92881c0643 Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.433923 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vnjcq"] Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.607073 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3d9a-account-create-584gn"] Nov 25 15:15:07 crc kubenswrapper[4731]: W1125 15:15:07.628254 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c2c1f53_61bb_4595_a5c9_17be8b0868a0.slice/crio-f280b811315244a1ddaca41dea5a312107a2ab76f6487290a3f0f30e4297cb6c WatchSource:0}: Error finding container f280b811315244a1ddaca41dea5a312107a2ab76f6487290a3f0f30e4297cb6c: Status 404 returned error can't find the container with id f280b811315244a1ddaca41dea5a312107a2ab76f6487290a3f0f30e4297cb6c Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.629577 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pmklk" event={"ID":"d4998d67-3897-41fa-8a9d-f7e37c102a61","Type":"ContainerStarted","Data":"20012d440077a9275e93061301a71f70cf90564b615112682870de6c1e2016b8"} Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.644326 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vnjcq" event={"ID":"b775affe-04bb-418b-aae9-9a83db82995a","Type":"ContainerStarted","Data":"68a25c6ad383becb02556cc086fefc8f2f47b79aa736e9148e6e3d7c56a7ccb7"} Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.646793 4731 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b713395b-6feb-411a-b13b-d35890a8b2d3","Type":"ContainerStarted","Data":"f215b0e09bf5b6ab18040d6571d80e7bf7618681c8c474965323f20bb04afdfb"} Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.655681 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9wqvh" event={"ID":"01b0994d-ff1b-47bf-8728-965ce2899e08","Type":"ContainerStarted","Data":"d33fb0c8b2ebeb00e69229674edade9af49c85d38a45367a2a0639952926e7a6"} Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.658006 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8cf475f5-wpg2b" event={"ID":"f49ff135-870b-4b14-8ec3-a20bec8b9ba6","Type":"ContainerStarted","Data":"a64674f91df84b46cb270cc37f543bce7f55d6134b3dfc08c3d10a616878cb49"} Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.663199 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cdf8597bf-t8vdc" event={"ID":"c7de698f-a9bd-4231-9cee-dd0c1c93e461","Type":"ContainerStarted","Data":"12ba7137ab54822e0084a56235db2acadb67e26e348a020bf6ccf9f8cd1b803f"} Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.673013 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8101-account-create-v6vwr" event={"ID":"b33ccf64-af15-436e-ac8b-9c9efc83f08f","Type":"ContainerStarted","Data":"d23193e2cef36173b4050b6b49d2b5d05ef952421b2cb5f2d8f94b92881c0643"} Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.678253 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" event={"ID":"eafdd6d2-3514-4de8-afc5-651d22a486a4","Type":"ContainerStarted","Data":"61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e"} Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.679016 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.685395 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79f1ac0d-5b42-4d61-b552-9f6675fee139","Type":"ContainerStarted","Data":"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b"} Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.702395 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" podStartSLOduration=11.702373115 podStartE2EDuration="11.702373115s" podCreationTimestamp="2025-11-25 15:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:07.697773269 +0000 UTC m=+1266.444907699" watchObservedRunningTime="2025-11-25 15:15:07.702373115 +0000 UTC m=+1266.449507535" Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.831990 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-50d5-account-create-2svk8"] Nov 25 15:15:07 crc kubenswrapper[4731]: W1125 15:15:07.832076 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod797ff534_6f19_4979_afc8_dfd592e24603.slice/crio-6b852e0b95aad22515fc9c95613c60bdc01d806c3b0683e3ae6f18434560fb7b WatchSource:0}: Error finding container 6b852e0b95aad22515fc9c95613c60bdc01d806c3b0683e3ae6f18434560fb7b: Status 404 returned error can't find the container with id 
6b852e0b95aad22515fc9c95613c60bdc01d806c3b0683e3ae6f18434560fb7b Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.889091 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.147:9292/healthcheck\": read tcp 10.217.0.2:39890->10.217.0.147:9292: read: connection reset by peer" Nov 25 15:15:07 crc kubenswrapper[4731]: I1125 15:15:07.889754 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.147:9292/healthcheck\": read tcp 10.217.0.2:39896->10.217.0.147:9292: read: connection reset by peer" Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.701478 4731 generic.go:334] "Generic (PLEG): container finished" podID="d4998d67-3897-41fa-8a9d-f7e37c102a61" containerID="9846acd7dc9890503766b946ca9c7f6b6bb6b6911fbd69f390fb77dfc0adbd35" exitCode=0 Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.701568 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pmklk" event={"ID":"d4998d67-3897-41fa-8a9d-f7e37c102a61","Type":"ContainerDied","Data":"9846acd7dc9890503766b946ca9c7f6b6bb6b6911fbd69f390fb77dfc0adbd35"} Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.703953 4731 generic.go:334] "Generic (PLEG): container finished" podID="b775affe-04bb-418b-aae9-9a83db82995a" containerID="d9d3f7c1519d1f94e9406193a59fad784785791a8422e29b1b93cb85ffdc8dec" exitCode=0 Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.704031 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vnjcq" event={"ID":"b775affe-04bb-418b-aae9-9a83db82995a","Type":"ContainerDied","Data":"d9d3f7c1519d1f94e9406193a59fad784785791a8422e29b1b93cb85ffdc8dec"} Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.707555 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b713395b-6feb-411a-b13b-d35890a8b2d3","Type":"ContainerStarted","Data":"14aa198748610124caa8359bd2ed4effcd0dcd271dcc6d15d1a6c31464732d55"} Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.709348 4731 generic.go:334] "Generic (PLEG): container finished" podID="01b0994d-ff1b-47bf-8728-965ce2899e08" containerID="aa7f46f602fa1d54626fbdb510cd866e7a0e9485550128840459249493c5e6ec" exitCode=0 Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.709430 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9wqvh" event={"ID":"01b0994d-ff1b-47bf-8728-965ce2899e08","Type":"ContainerDied","Data":"aa7f46f602fa1d54626fbdb510cd866e7a0e9485550128840459249493c5e6ec"} Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.714763 4731 generic.go:334] "Generic (PLEG): container finished" podID="797ff534-6f19-4979-afc8-dfd592e24603" containerID="151dfab801c3d57bf692d82a7ca8bb547a4d0ffc8a5ba915c151846e679957bc" exitCode=0 Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.714875 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-50d5-account-create-2svk8" event={"ID":"797ff534-6f19-4979-afc8-dfd592e24603","Type":"ContainerDied","Data":"151dfab801c3d57bf692d82a7ca8bb547a4d0ffc8a5ba915c151846e679957bc"} Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.714936 4731 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell1-50d5-account-create-2svk8" event={"ID":"797ff534-6f19-4979-afc8-dfd592e24603","Type":"ContainerStarted","Data":"6b852e0b95aad22515fc9c95613c60bdc01d806c3b0683e3ae6f18434560fb7b"} Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.723968 4731 generic.go:334] "Generic (PLEG): container finished" podID="4c2c1f53-61bb-4595-a5c9-17be8b0868a0" containerID="c95c37daa5f3d56238e13045098d3e9c791a99e322def87bbeb255fa6683322d" exitCode=0 Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.724070 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3d9a-account-create-584gn" event={"ID":"4c2c1f53-61bb-4595-a5c9-17be8b0868a0","Type":"ContainerDied","Data":"c95c37daa5f3d56238e13045098d3e9c791a99e322def87bbeb255fa6683322d"} Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.724115 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3d9a-account-create-584gn" event={"ID":"4c2c1f53-61bb-4595-a5c9-17be8b0868a0","Type":"ContainerStarted","Data":"f280b811315244a1ddaca41dea5a312107a2ab76f6487290a3f0f30e4297cb6c"} Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.726421 4731 generic.go:334] "Generic (PLEG): container finished" podID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerID="f50fff902b1f5357488d2fee4492fb5988c0da38a001ab51bf7da633a84af0d5" exitCode=0 Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.726504 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51de4e1c-41f4-42ae-be32-ba6ad0c215f7","Type":"ContainerDied","Data":"f50fff902b1f5357488d2fee4492fb5988c0da38a001ab51bf7da633a84af0d5"} Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.729979 4731 generic.go:334] "Generic (PLEG): container finished" podID="b33ccf64-af15-436e-ac8b-9c9efc83f08f" containerID="384eb658513fd92b8782dbb00c5dfa23521521b7546773d77d9c8c3da88c497b" exitCode=0 Nov 25 15:15:08 crc kubenswrapper[4731]: I1125 15:15:08.731130 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8101-account-create-v6vwr" event={"ID":"b33ccf64-af15-436e-ac8b-9c9efc83f08f","Type":"ContainerDied","Data":"384eb658513fd92b8782dbb00c5dfa23521521b7546773d77d9c8c3da88c497b"} Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.134474 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.310437 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqfvn\" (UniqueName: \"kubernetes.io/projected/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-kube-api-access-wqfvn\") pod \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.310668 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-config-volume\") pod \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.310720 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-secret-volume\") pod \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\" (UID: \"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa\") " Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.311450 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-config-volume" (OuterVolumeSpecName: "config-volume") pod "e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa" (UID: "e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.311678 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.315355 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-kube-api-access-wqfvn" (OuterVolumeSpecName: "kube-api-access-wqfvn") pod "e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa" (UID: "e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa"). InnerVolumeSpecName "kube-api-access-wqfvn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.330686 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa" (UID: "e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.413374 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.413403 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqfvn\" (UniqueName: \"kubernetes.io/projected/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa-kube-api-access-wqfvn\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.767752 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"51de4e1c-41f4-42ae-be32-ba6ad0c215f7","Type":"ContainerDied","Data":"2f96d4cbb6358d6985d2edb591db60d5b6cfa0158e63aca3df9b2626fcfb73c8"} Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.768126 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f96d4cbb6358d6985d2edb591db60d5b6cfa0158e63aca3df9b2626fcfb73c8" Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.773658 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.774546 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn" event={"ID":"e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa","Type":"ContainerDied","Data":"fdc9a169f1b43ce7319836abbad5871ac8a0539cd8af8fc37ed9a3495dcb0cff"} Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.774582 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdc9a169f1b43ce7319836abbad5871ac8a0539cd8af8fc37ed9a3495dcb0cff" Nov 25 15:15:09 crc kubenswrapper[4731]: I1125 15:15:09.854808 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.024979 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-config-data\") pod \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.025032 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-public-tls-certs\") pod \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.025091 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-logs\") pod \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.025130 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5wd4\" (UniqueName: \"kubernetes.io/projected/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-kube-api-access-r5wd4\") pod \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.025159 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-combined-ca-bundle\") pod \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.025196 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-httpd-run\") pod \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.025226 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-scripts\") pod \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.025241 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\" (UID: \"51de4e1c-41f4-42ae-be32-ba6ad0c215f7\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.035295 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "51de4e1c-41f4-42ae-be32-ba6ad0c215f7" (UID: "51de4e1c-41f4-42ae-be32-ba6ad0c215f7"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.042336 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "glance") pod "51de4e1c-41f4-42ae-be32-ba6ad0c215f7" (UID: "51de4e1c-41f4-42ae-be32-ba6ad0c215f7"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.042484 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-logs" (OuterVolumeSpecName: "logs") pod "51de4e1c-41f4-42ae-be32-ba6ad0c215f7" (UID: "51de4e1c-41f4-42ae-be32-ba6ad0c215f7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.048962 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-scripts" (OuterVolumeSpecName: "scripts") pod "51de4e1c-41f4-42ae-be32-ba6ad0c215f7" (UID: "51de4e1c-41f4-42ae-be32-ba6ad0c215f7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.066024 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-kube-api-access-r5wd4" (OuterVolumeSpecName: "kube-api-access-r5wd4") pod "51de4e1c-41f4-42ae-be32-ba6ad0c215f7" (UID: "51de4e1c-41f4-42ae-be32-ba6ad0c215f7"). InnerVolumeSpecName "kube-api-access-r5wd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.096882 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51de4e1c-41f4-42ae-be32-ba6ad0c215f7" (UID: "51de4e1c-41f4-42ae-be32-ba6ad0c215f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.110268 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-config-data" (OuterVolumeSpecName: "config-data") pod "51de4e1c-41f4-42ae-be32-ba6ad0c215f7" (UID: "51de4e1c-41f4-42ae-be32-ba6ad0c215f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.128482 4731 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-httpd-run\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.128740 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.128867 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.128955 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.129017 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.129074 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5wd4\" (UniqueName: \"kubernetes.io/projected/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-kube-api-access-r5wd4\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.129132 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.190402 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.230693 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.248091 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "51de4e1c-41f4-42ae-be32-ba6ad0c215f7" (UID: "51de4e1c-41f4-42ae-be32-ba6ad0c215f7"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.356931 4731 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/51de4e1c-41f4-42ae-be32-ba6ad0c215f7-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.798073 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-8101-account-create-v6vwr" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.815947 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vnjcq" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.842027 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-9wqvh" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.860314 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-50d5-account-create-2svk8" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.869442 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3d9a-account-create-584gn" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.893221 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-50d5-account-create-2svk8" event={"ID":"797ff534-6f19-4979-afc8-dfd592e24603","Type":"ContainerDied","Data":"6b852e0b95aad22515fc9c95613c60bdc01d806c3b0683e3ae6f18434560fb7b"} Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.893486 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b852e0b95aad22515fc9c95613c60bdc01d806c3b0683e3ae6f18434560fb7b" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.894074 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" event={"ID":"81a541bc-b19f-488c-9d8d-2c48663b4033","Type":"ContainerStarted","Data":"ced4dbbb040f60c7e27942c96447d329de7b016d7725882f1b2ee13ceea24e78"} Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.894289 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" podUID="81a541bc-b19f-488c-9d8d-2c48663b4033" containerName="heat-cfnapi" containerID="cri-o://ced4dbbb040f60c7e27942c96447d329de7b016d7725882f1b2ee13ceea24e78" gracePeriod=60 Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.894593 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.897239 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vs9xp\" (UniqueName: \"kubernetes.io/projected/b33ccf64-af15-436e-ac8b-9c9efc83f08f-kube-api-access-vs9xp\") pod \"b33ccf64-af15-436e-ac8b-9c9efc83f08f\" (UID: \"b33ccf64-af15-436e-ac8b-9c9efc83f08f\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.897452 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dg4zt\" (UniqueName: \"kubernetes.io/projected/b775affe-04bb-418b-aae9-9a83db82995a-kube-api-access-dg4zt\") pod \"b775affe-04bb-418b-aae9-9a83db82995a\" (UID: \"b775affe-04bb-418b-aae9-9a83db82995a\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.897550 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b775affe-04bb-418b-aae9-9a83db82995a-operator-scripts\") pod \"b775affe-04bb-418b-aae9-9a83db82995a\" (UID: \"b775affe-04bb-418b-aae9-9a83db82995a\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.897674 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33ccf64-af15-436e-ac8b-9c9efc83f08f-operator-scripts\") pod \"b33ccf64-af15-436e-ac8b-9c9efc83f08f\" (UID: \"b33ccf64-af15-436e-ac8b-9c9efc83f08f\") " Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.906144 
4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69fd4bd965-vmt4d" event={"ID":"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8","Type":"ContainerStarted","Data":"1afab7ff5f420277172171948c6fc857eceb6778475b8444826234ef3bb845c3"} Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.906205 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b33ccf64-af15-436e-ac8b-9c9efc83f08f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b33ccf64-af15-436e-ac8b-9c9efc83f08f" (UID: "b33ccf64-af15-436e-ac8b-9c9efc83f08f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.907085 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.913435 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pmklk" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.913745 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b33ccf64-af15-436e-ac8b-9c9efc83f08f-kube-api-access-vs9xp" (OuterVolumeSpecName: "kube-api-access-vs9xp") pod "b33ccf64-af15-436e-ac8b-9c9efc83f08f" (UID: "b33ccf64-af15-436e-ac8b-9c9efc83f08f"). InnerVolumeSpecName "kube-api-access-vs9xp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.915611 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b775affe-04bb-418b-aae9-9a83db82995a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b775affe-04bb-418b-aae9-9a83db82995a" (UID: "b775affe-04bb-418b-aae9-9a83db82995a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.925479 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b775affe-04bb-418b-aae9-9a83db82995a-kube-api-access-dg4zt" (OuterVolumeSpecName: "kube-api-access-dg4zt") pod "b775affe-04bb-418b-aae9-9a83db82995a" (UID: "b775affe-04bb-418b-aae9-9a83db82995a"). InnerVolumeSpecName "kube-api-access-dg4zt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.925723 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79f1ac0d-5b42-4d61-b552-9f6675fee139","Type":"ContainerStarted","Data":"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4"} Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.935181 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" podStartSLOduration=10.378145083 podStartE2EDuration="14.935162216s" podCreationTimestamp="2025-11-25 15:14:56 +0000 UTC" firstStartedPulling="2025-11-25 15:15:05.156459079 +0000 UTC m=+1263.903593509" lastFinishedPulling="2025-11-25 15:15:09.713476212 +0000 UTC m=+1268.460610642" observedRunningTime="2025-11-25 15:15:10.931579896 +0000 UTC m=+1269.678714326" watchObservedRunningTime="2025-11-25 15:15:10.935162216 +0000 UTC m=+1269.682296646" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.947833 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-6c9d7769bd-zmnpm" podUID="53a766a8-f8e4-4d2d-8202-e242a2e70d61" containerName="heat-api" containerID="cri-o://162e430b570dec75d9b8b874e4fd615530a2570d7548f84ed88d21cc5f79e912" gracePeriod=60 Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.947912 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c9d7769bd-zmnpm" event={"ID":"53a766a8-f8e4-4d2d-8202-e242a2e70d61","Type":"ContainerStarted","Data":"162e430b570dec75d9b8b874e4fd615530a2570d7548f84ed88d21cc5f79e912"} Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.947938 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.976549 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" event={"ID":"94fb83b0-f12c-45ee-bae8-058c7b59d28f","Type":"ContainerStarted","Data":"fa2fe57adf96583667c2e765fdf52ee1d68ea2f56495a0f45f5cbc53a43ca6a1"} Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.977182 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.983658 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vnjcq" event={"ID":"b775affe-04bb-418b-aae9-9a83db82995a","Type":"ContainerDied","Data":"68a25c6ad383becb02556cc086fefc8f2f47b79aa736e9148e6e3d7c56a7ccb7"} Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.983686 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68a25c6ad383becb02556cc086fefc8f2f47b79aa736e9148e6e3d7c56a7ccb7" Nov 25 15:15:10 crc kubenswrapper[4731]: I1125 15:15:10.983748 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vnjcq" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:10.999295 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/797ff534-6f19-4979-afc8-dfd592e24603-operator-scripts\") pod \"797ff534-6f19-4979-afc8-dfd592e24603\" (UID: \"797ff534-6f19-4979-afc8-dfd592e24603\") " Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:10.999400 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4998d67-3897-41fa-8a9d-f7e37c102a61-operator-scripts\") pod \"d4998d67-3897-41fa-8a9d-f7e37c102a61\" (UID: \"d4998d67-3897-41fa-8a9d-f7e37c102a61\") " Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:10.999481 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8km2f\" (UniqueName: \"kubernetes.io/projected/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-kube-api-access-8km2f\") pod \"4c2c1f53-61bb-4595-a5c9-17be8b0868a0\" (UID: \"4c2c1f53-61bb-4595-a5c9-17be8b0868a0\") " Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:10.999512 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b0994d-ff1b-47bf-8728-965ce2899e08-operator-scripts\") pod \"01b0994d-ff1b-47bf-8728-965ce2899e08\" (UID: \"01b0994d-ff1b-47bf-8728-965ce2899e08\") " Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:10.999543 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt4v7\" (UniqueName: \"kubernetes.io/projected/01b0994d-ff1b-47bf-8728-965ce2899e08-kube-api-access-pt4v7\") pod \"01b0994d-ff1b-47bf-8728-965ce2899e08\" (UID: \"01b0994d-ff1b-47bf-8728-965ce2899e08\") " Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:10.999567 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-operator-scripts\") pod \"4c2c1f53-61bb-4595-a5c9-17be8b0868a0\" (UID: \"4c2c1f53-61bb-4595-a5c9-17be8b0868a0\") " Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:10.999605 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsp8b\" (UniqueName: \"kubernetes.io/projected/d4998d67-3897-41fa-8a9d-f7e37c102a61-kube-api-access-bsp8b\") pod \"d4998d67-3897-41fa-8a9d-f7e37c102a61\" (UID: \"d4998d67-3897-41fa-8a9d-f7e37c102a61\") " Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:10.999622 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-728sm\" (UniqueName: \"kubernetes.io/projected/797ff534-6f19-4979-afc8-dfd592e24603-kube-api-access-728sm\") pod \"797ff534-6f19-4979-afc8-dfd592e24603\" (UID: \"797ff534-6f19-4979-afc8-dfd592e24603\") " Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.001168 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c2c1f53-61bb-4595-a5c9-17be8b0868a0" (UID: "4c2c1f53-61bb-4595-a5c9-17be8b0868a0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.001489 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b0994d-ff1b-47bf-8728-965ce2899e08-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "01b0994d-ff1b-47bf-8728-965ce2899e08" (UID: "01b0994d-ff1b-47bf-8728-965ce2899e08"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.002902 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797ff534-6f19-4979-afc8-dfd592e24603-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "797ff534-6f19-4979-afc8-dfd592e24603" (UID: "797ff534-6f19-4979-afc8-dfd592e24603"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.003262 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4998d67-3897-41fa-8a9d-f7e37c102a61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4998d67-3897-41fa-8a9d-f7e37c102a61" (UID: "d4998d67-3897-41fa-8a9d-f7e37c102a61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.003933 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4998d67-3897-41fa-8a9d-f7e37c102a61-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.003951 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b33ccf64-af15-436e-ac8b-9c9efc83f08f-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.003960 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01b0994d-ff1b-47bf-8728-965ce2899e08-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.003970 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.003979 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vs9xp\" (UniqueName: \"kubernetes.io/projected/b33ccf64-af15-436e-ac8b-9c9efc83f08f-kube-api-access-vs9xp\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.003990 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/797ff534-6f19-4979-afc8-dfd592e24603-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.004000 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dg4zt\" (UniqueName: \"kubernetes.io/projected/b775affe-04bb-418b-aae9-9a83db82995a-kube-api-access-dg4zt\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.004008 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b775affe-04bb-418b-aae9-9a83db82995a-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.018523 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b0994d-ff1b-47bf-8728-965ce2899e08-kube-api-access-pt4v7" (OuterVolumeSpecName: "kube-api-access-pt4v7") pod "01b0994d-ff1b-47bf-8728-965ce2899e08" (UID: "01b0994d-ff1b-47bf-8728-965ce2899e08"). InnerVolumeSpecName "kube-api-access-pt4v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.021563 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-kube-api-access-8km2f" (OuterVolumeSpecName: "kube-api-access-8km2f") pod "4c2c1f53-61bb-4595-a5c9-17be8b0868a0" (UID: "4c2c1f53-61bb-4595-a5c9-17be8b0868a0"). InnerVolumeSpecName "kube-api-access-8km2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.021829 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4998d67-3897-41fa-8a9d-f7e37c102a61-kube-api-access-bsp8b" (OuterVolumeSpecName: "kube-api-access-bsp8b") pod "d4998d67-3897-41fa-8a9d-f7e37c102a61" (UID: "d4998d67-3897-41fa-8a9d-f7e37c102a61"). InnerVolumeSpecName "kube-api-access-bsp8b". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.022005 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797ff534-6f19-4979-afc8-dfd592e24603-kube-api-access-728sm" (OuterVolumeSpecName: "kube-api-access-728sm") pod "797ff534-6f19-4979-afc8-dfd592e24603" (UID: "797ff534-6f19-4979-afc8-dfd592e24603"). InnerVolumeSpecName "kube-api-access-728sm". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.024618 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-9wqvh" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.025859 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-9wqvh" event={"ID":"01b0994d-ff1b-47bf-8728-965ce2899e08","Type":"ContainerDied","Data":"d33fb0c8b2ebeb00e69229674edade9af49c85d38a45367a2a0639952926e7a6"} Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.025893 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33fb0c8b2ebeb00e69229674edade9af49c85d38a45367a2a0639952926e7a6" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.045253 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-69fd4bd965-vmt4d" podStartSLOduration=3.252572806 podStartE2EDuration="8.045235923s" podCreationTimestamp="2025-11-25 15:15:03 +0000 UTC" firstStartedPulling="2025-11-25 15:15:04.921191845 +0000 UTC m=+1263.668326275" lastFinishedPulling="2025-11-25 15:15:09.713854962 +0000 UTC m=+1268.460989392" observedRunningTime="2025-11-25 15:15:10.978601532 +0000 UTC m=+1269.725735962" watchObservedRunningTime="2025-11-25 15:15:11.045235923 +0000 UTC m=+1269.792370353" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.051828 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3d9a-account-create-584gn" event={"ID":"4c2c1f53-61bb-4595-a5c9-17be8b0868a0","Type":"ContainerDied","Data":"f280b811315244a1ddaca41dea5a312107a2ab76f6487290a3f0f30e4297cb6c"} Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.051867 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f280b811315244a1ddaca41dea5a312107a2ab76f6487290a3f0f30e4297cb6c" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.051869 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3d9a-account-create-584gn" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.056151 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.056259 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-8101-account-create-v6vwr" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.056301 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-8101-account-create-v6vwr" event={"ID":"b33ccf64-af15-436e-ac8b-9c9efc83f08f","Type":"ContainerDied","Data":"d23193e2cef36173b4050b6b49d2b5d05ef952421b2cb5f2d8f94b92881c0643"} Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.058712 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d23193e2cef36173b4050b6b49d2b5d05ef952421b2cb5f2d8f94b92881c0643" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.110860 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8km2f\" (UniqueName: \"kubernetes.io/projected/4c2c1f53-61bb-4595-a5c9-17be8b0868a0-kube-api-access-8km2f\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.110881 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt4v7\" (UniqueName: \"kubernetes.io/projected/01b0994d-ff1b-47bf-8728-965ce2899e08-kube-api-access-pt4v7\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.110892 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsp8b\" (UniqueName: \"kubernetes.io/projected/d4998d67-3897-41fa-8a9d-f7e37c102a61-kube-api-access-bsp8b\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.110901 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-728sm\" (UniqueName: \"kubernetes.io/projected/797ff534-6f19-4979-afc8-dfd592e24603-kube-api-access-728sm\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.180224 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-6c9d7769bd-zmnpm" podStartSLOduration=10.533295916 podStartE2EDuration="15.180198307s" podCreationTimestamp="2025-11-25 15:14:56 +0000 UTC" firstStartedPulling="2025-11-25 15:15:05.117625989 +0000 UTC m=+1263.864760419" lastFinishedPulling="2025-11-25 15:15:09.76452838 +0000 UTC m=+1268.511662810" observedRunningTime="2025-11-25 15:15:11.020272133 +0000 UTC m=+1269.767406563" watchObservedRunningTime="2025-11-25 15:15:11.180198307 +0000 UTC m=+1269.927332737" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.279515 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" podStartSLOduration=3.471425196 podStartE2EDuration="8.279495521s" podCreationTimestamp="2025-11-25 15:15:03 +0000 UTC" firstStartedPulling="2025-11-25 15:15:04.955916011 +0000 UTC m=+1263.703050441" lastFinishedPulling="2025-11-25 15:15:09.763986336 +0000 UTC m=+1268.511120766" observedRunningTime="2025-11-25 15:15:11.046472194 +0000 UTC m=+1269.793606614" watchObservedRunningTime="2025-11-25 15:15:11.279495521 +0000 UTC m=+1270.026629951" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.319482 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.340404 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.361881 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:11 crc kubenswrapper[4731]: E1125 15:15:11.362450 4731 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4998d67-3897-41fa-8a9d-f7e37c102a61" containerName="mariadb-database-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362478 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4998d67-3897-41fa-8a9d-f7e37c102a61" containerName="mariadb-database-create" Nov 25 15:15:11 crc kubenswrapper[4731]: E1125 15:15:11.362501 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerName="glance-httpd" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362511 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerName="glance-httpd" Nov 25 15:15:11 crc kubenswrapper[4731]: E1125 15:15:11.362536 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa" containerName="collect-profiles" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362544 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa" containerName="collect-profiles" Nov 25 15:15:11 crc kubenswrapper[4731]: E1125 15:15:11.362554 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797ff534-6f19-4979-afc8-dfd592e24603" containerName="mariadb-account-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362561 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="797ff534-6f19-4979-afc8-dfd592e24603" containerName="mariadb-account-create" Nov 25 15:15:11 crc kubenswrapper[4731]: E1125 15:15:11.362575 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2c1f53-61bb-4595-a5c9-17be8b0868a0" containerName="mariadb-account-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362584 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2c1f53-61bb-4595-a5c9-17be8b0868a0" containerName="mariadb-account-create" Nov 25 15:15:11 crc kubenswrapper[4731]: E1125 15:15:11.362608 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b775affe-04bb-418b-aae9-9a83db82995a" containerName="mariadb-database-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362617 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b775affe-04bb-418b-aae9-9a83db82995a" containerName="mariadb-database-create" Nov 25 15:15:11 crc kubenswrapper[4731]: E1125 15:15:11.362636 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerName="glance-log" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362645 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerName="glance-log" Nov 25 15:15:11 crc kubenswrapper[4731]: E1125 15:15:11.362659 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b33ccf64-af15-436e-ac8b-9c9efc83f08f" containerName="mariadb-account-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362667 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b33ccf64-af15-436e-ac8b-9c9efc83f08f" containerName="mariadb-account-create" Nov 25 15:15:11 crc kubenswrapper[4731]: E1125 15:15:11.362680 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b0994d-ff1b-47bf-8728-965ce2899e08" containerName="mariadb-database-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362687 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b0994d-ff1b-47bf-8728-965ce2899e08" 
containerName="mariadb-database-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362914 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b33ccf64-af15-436e-ac8b-9c9efc83f08f" containerName="mariadb-account-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362929 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b775affe-04bb-418b-aae9-9a83db82995a" containerName="mariadb-database-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362947 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="797ff534-6f19-4979-afc8-dfd592e24603" containerName="mariadb-account-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362969 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4998d67-3897-41fa-8a9d-f7e37c102a61" containerName="mariadb-database-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362984 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa" containerName="collect-profiles" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.362997 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerName="glance-log" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.363005 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" containerName="glance-httpd" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.363017 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b0994d-ff1b-47bf-8728-965ce2899e08" containerName="mariadb-database-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.363030 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2c1f53-61bb-4595-a5c9-17be8b0868a0" containerName="mariadb-account-create" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.364326 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.375115 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.375312 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.375447 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.449346 4731 scope.go:117] "RemoveContainer" containerID="07f98050d042bcc2f60da5f63c2eb29d4bc00fc0d808f2c5425ea58e6b7ef000" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.476432 4731 scope.go:117] "RemoveContainer" containerID="72b9a78215a7dc2f1610f5130dda14e96c01fbf8797e50ce7b2de37d56a58bfa" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.529116 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-config-data\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.529482 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.529511 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f6e0de1e-5e50-468c-a911-41b1c3e7261a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.529538 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.529591 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-scripts\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.529618 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e0de1e-5e50-468c-a911-41b1c3e7261a-logs\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.529705 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.529740 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6smp\" (UniqueName: \"kubernetes.io/projected/f6e0de1e-5e50-468c-a911-41b1c3e7261a-kube-api-access-s6smp\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.540665 4731 scope.go:117] "RemoveContainer" containerID="617f3cf790c14ac79f680e4803e057e04bf56a7e9b7fafed50f9ece22ce8b1a4" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.631764 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.631851 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6smp\" (UniqueName: \"kubernetes.io/projected/f6e0de1e-5e50-468c-a911-41b1c3e7261a-kube-api-access-s6smp\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.632275 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.633545 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-config-data\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.633610 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.633646 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f6e0de1e-5e50-468c-a911-41b1c3e7261a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.633684 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc 
kubenswrapper[4731]: I1125 15:15:11.633809 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-scripts\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.633868 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e0de1e-5e50-468c-a911-41b1c3e7261a-logs\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.634406 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f6e0de1e-5e50-468c-a911-41b1c3e7261a-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.634466 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6e0de1e-5e50-468c-a911-41b1c3e7261a-logs\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.639019 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.643544 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-config-data\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.644583 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.656461 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f6e0de1e-5e50-468c-a911-41b1c3e7261a-scripts\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.660178 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6smp\" (UniqueName: \"kubernetes.io/projected/f6e0de1e-5e50-468c-a911-41b1c3e7261a-kube-api-access-s6smp\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.680026 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage06-crc\") pod \"glance-default-external-api-0\" (UID: \"f6e0de1e-5e50-468c-a911-41b1c3e7261a\") " pod="openstack/glance-default-external-api-0" Nov 25 15:15:11 crc kubenswrapper[4731]: I1125 15:15:11.773024 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.072863 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-cdf8597bf-t8vdc" event={"ID":"c7de698f-a9bd-4231-9cee-dd0c1c93e461","Type":"ContainerStarted","Data":"2d7ff8236815707763643887db7f998c64017e05f0d757ee9387d0d010831db7"} Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.073902 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.075945 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-pmklk" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.076148 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-pmklk" event={"ID":"d4998d67-3897-41fa-8a9d-f7e37c102a61","Type":"ContainerDied","Data":"20012d440077a9275e93061301a71f70cf90564b615112682870de6c1e2016b8"} Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.076200 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20012d440077a9275e93061301a71f70cf90564b615112682870de6c1e2016b8" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.080308 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-8cf475f5-wpg2b" event={"ID":"f49ff135-870b-4b14-8ec3-a20bec8b9ba6","Type":"ContainerStarted","Data":"39530ea5d6715cda308df11fc918cf86de41de4852da1ac863a80c8f73c23ba4"} Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.081688 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.086889 4731 generic.go:334] "Generic (PLEG): container finished" podID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" containerID="1afab7ff5f420277172171948c6fc857eceb6778475b8444826234ef3bb845c3" exitCode=1 Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.086984 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69fd4bd965-vmt4d" event={"ID":"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8","Type":"ContainerDied","Data":"1afab7ff5f420277172171948c6fc857eceb6778475b8444826234ef3bb845c3"} Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.087588 4731 scope.go:117] "RemoveContainer" containerID="1afab7ff5f420277172171948c6fc857eceb6778475b8444826234ef3bb845c3" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.118117 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79f1ac0d-5b42-4d61-b552-9f6675fee139","Type":"ContainerStarted","Data":"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc"} Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.119191 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-cdf8597bf-t8vdc" podStartSLOduration=4.472437332 podStartE2EDuration="7.119166701s" podCreationTimestamp="2025-11-25 15:15:05 +0000 UTC" firstStartedPulling="2025-11-25 15:15:07.18833031 +0000 UTC m=+1265.935464740" lastFinishedPulling="2025-11-25 15:15:09.835059679 +0000 UTC 
m=+1268.582194109" observedRunningTime="2025-11-25 15:15:12.098976762 +0000 UTC m=+1270.846111212" watchObservedRunningTime="2025-11-25 15:15:12.119166701 +0000 UTC m=+1270.866301131" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.124175 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.130429 4731 generic.go:334] "Generic (PLEG): container finished" podID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" containerID="fa2fe57adf96583667c2e765fdf52ee1d68ea2f56495a0f45f5cbc53a43ca6a1" exitCode=1 Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.134087 4731 scope.go:117] "RemoveContainer" containerID="fa2fe57adf96583667c2e765fdf52ee1d68ea2f56495a0f45f5cbc53a43ca6a1" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.135158 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" event={"ID":"94fb83b0-f12c-45ee-bae8-058c7b59d28f","Type":"ContainerDied","Data":"fa2fe57adf96583667c2e765fdf52ee1d68ea2f56495a0f45f5cbc53a43ca6a1"} Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.140520 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-50d5-account-create-2svk8" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.141579 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b713395b-6feb-411a-b13b-d35890a8b2d3","Type":"ContainerStarted","Data":"1ad85046be05e7efff05f7be6ff4b4e0c1dd890368fe34c68905fd87a6222053"} Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.202641 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-8cf475f5-wpg2b" podStartSLOduration=4.231435183 podStartE2EDuration="7.202613846s" podCreationTimestamp="2025-11-25 15:15:05 +0000 UTC" firstStartedPulling="2025-11-25 15:15:06.792914946 +0000 UTC m=+1265.540049376" lastFinishedPulling="2025-11-25 15:15:09.764093609 +0000 UTC m=+1268.511228039" observedRunningTime="2025-11-25 15:15:12.122806973 +0000 UTC m=+1270.869941413" watchObservedRunningTime="2025-11-25 15:15:12.202613846 +0000 UTC m=+1270.949748286" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.310120 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vksdl"] Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.310564 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" podUID="0bdd6185-ea4d-4507-b925-5f093fc33931" containerName="dnsmasq-dns" containerID="cri-o://699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda" gracePeriod=10 Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.312801 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.312778464 podStartE2EDuration="7.312778464s" podCreationTimestamp="2025-11-25 15:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:12.253621312 +0000 UTC m=+1271.000755742" watchObservedRunningTime="2025-11-25 15:15:12.312778464 +0000 UTC m=+1271.059912884" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.530547 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Nov 25 15:15:12 crc kubenswrapper[4731]: 
W1125 15:15:12.541925 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6e0de1e_5e50_468c_a911_41b1c3e7261a.slice/crio-db796f61bd9549ab49b9fce46a6ff6a0527208381080f0fe3ed5930e2ea2b0dd WatchSource:0}: Error finding container db796f61bd9549ab49b9fce46a6ff6a0527208381080f0fe3ed5930e2ea2b0dd: Status 404 returned error can't find the container with id db796f61bd9549ab49b9fce46a6ff6a0527208381080f0fe3ed5930e2ea2b0dd Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.648270 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" podUID="0bdd6185-ea4d-4507-b925-5f093fc33931" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.160:5353: connect: connection refused" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.864098 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51de4e1c-41f4-42ae-be32-ba6ad0c215f7" path="/var/lib/kubelet/pods/51de4e1c-41f4-42ae-be32-ba6ad0c215f7/volumes" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.910187 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.981926 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-config\") pod \"0bdd6185-ea4d-4507-b925-5f093fc33931\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.982047 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-nb\") pod \"0bdd6185-ea4d-4507-b925-5f093fc33931\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.982087 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66t9v\" (UniqueName: \"kubernetes.io/projected/0bdd6185-ea4d-4507-b925-5f093fc33931-kube-api-access-66t9v\") pod \"0bdd6185-ea4d-4507-b925-5f093fc33931\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.982173 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-svc\") pod \"0bdd6185-ea4d-4507-b925-5f093fc33931\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.982216 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-swift-storage-0\") pod \"0bdd6185-ea4d-4507-b925-5f093fc33931\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.982861 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-sb\") pod \"0bdd6185-ea4d-4507-b925-5f093fc33931\" (UID: \"0bdd6185-ea4d-4507-b925-5f093fc33931\") " Nov 25 15:15:12 crc kubenswrapper[4731]: I1125 15:15:12.987207 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/0bdd6185-ea4d-4507-b925-5f093fc33931-kube-api-access-66t9v" (OuterVolumeSpecName: "kube-api-access-66t9v") pod "0bdd6185-ea4d-4507-b925-5f093fc33931" (UID: "0bdd6185-ea4d-4507-b925-5f093fc33931"). InnerVolumeSpecName "kube-api-access-66t9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.038044 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0bdd6185-ea4d-4507-b925-5f093fc33931" (UID: "0bdd6185-ea4d-4507-b925-5f093fc33931"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.045543 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-config" (OuterVolumeSpecName: "config") pod "0bdd6185-ea4d-4507-b925-5f093fc33931" (UID: "0bdd6185-ea4d-4507-b925-5f093fc33931"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.055088 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0bdd6185-ea4d-4507-b925-5f093fc33931" (UID: "0bdd6185-ea4d-4507-b925-5f093fc33931"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.067671 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0bdd6185-ea4d-4507-b925-5f093fc33931" (UID: "0bdd6185-ea4d-4507-b925-5f093fc33931"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.084682 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0bdd6185-ea4d-4507-b925-5f093fc33931" (UID: "0bdd6185-ea4d-4507-b925-5f093fc33931"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.085318 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.085334 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.085343 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.085354 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.085363 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66t9v\" (UniqueName: \"kubernetes.io/projected/0bdd6185-ea4d-4507-b925-5f093fc33931-kube-api-access-66t9v\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.085374 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0bdd6185-ea4d-4507-b925-5f093fc33931-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.162193 4731 generic.go:334] "Generic (PLEG): container finished" podID="0bdd6185-ea4d-4507-b925-5f093fc33931" containerID="699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda" exitCode=0 Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.162536 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" event={"ID":"0bdd6185-ea4d-4507-b925-5f093fc33931","Type":"ContainerDied","Data":"699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda"} Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.162564 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" event={"ID":"0bdd6185-ea4d-4507-b925-5f093fc33931","Type":"ContainerDied","Data":"daabd4dcd26f8e97f68fccc519397e8481f4d620380ab559ee20a87334631a91"} Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.162580 4731 scope.go:117] "RemoveContainer" containerID="699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.162700 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-vksdl" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.168631 4731 generic.go:334] "Generic (PLEG): container finished" podID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" containerID="3e686da5c1694ef973dd392dd3f528227a2dceef9fc7eab8736b4d8a38ae8739" exitCode=1 Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.168699 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" event={"ID":"94fb83b0-f12c-45ee-bae8-058c7b59d28f","Type":"ContainerDied","Data":"3e686da5c1694ef973dd392dd3f528227a2dceef9fc7eab8736b4d8a38ae8739"} Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.169352 4731 scope.go:117] "RemoveContainer" containerID="3e686da5c1694ef973dd392dd3f528227a2dceef9fc7eab8736b4d8a38ae8739" Nov 25 15:15:13 crc kubenswrapper[4731]: E1125 15:15:13.169675 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-549496ffb7-8jt8x_openstack(94fb83b0-f12c-45ee-bae8-058c7b59d28f)\"" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" podUID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.178231 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f6e0de1e-5e50-468c-a911-41b1c3e7261a","Type":"ContainerStarted","Data":"db796f61bd9549ab49b9fce46a6ff6a0527208381080f0fe3ed5930e2ea2b0dd"} Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.181283 4731 generic.go:334] "Generic (PLEG): container finished" podID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" containerID="f503733095a6ce906ac83113d01fec3983d399e5e83b0fa8bbc54362d36b7399" exitCode=1 Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.181347 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69fd4bd965-vmt4d" event={"ID":"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8","Type":"ContainerDied","Data":"f503733095a6ce906ac83113d01fec3983d399e5e83b0fa8bbc54362d36b7399"} Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.182119 4731 scope.go:117] "RemoveContainer" containerID="f503733095a6ce906ac83113d01fec3983d399e5e83b0fa8bbc54362d36b7399" Nov 25 15:15:13 crc kubenswrapper[4731]: E1125 15:15:13.182399 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-69fd4bd965-vmt4d_openstack(a9e3145a-f6c6-47d6-9241-2a919c8a7ac8)\"" pod="openstack/heat-api-69fd4bd965-vmt4d" podUID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.193646 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79f1ac0d-5b42-4d61-b552-9f6675fee139","Type":"ContainerStarted","Data":"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59"} Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.193682 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.195523 4731 scope.go:117] "RemoveContainer" containerID="f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.230146 4731 scope.go:117] "RemoveContainer" containerID="699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda" Nov 25 15:15:13 crc 
kubenswrapper[4731]: I1125 15:15:13.230933 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vksdl"] Nov 25 15:15:13 crc kubenswrapper[4731]: E1125 15:15:13.233380 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda\": container with ID starting with 699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda not found: ID does not exist" containerID="699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.233410 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda"} err="failed to get container status \"699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda\": rpc error: code = NotFound desc = could not find container \"699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda\": container with ID starting with 699cdda3f9173602e313c50437b2b7f372b5b88d2a6a62475d9d7c8d499abfda not found: ID does not exist" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.233429 4731 scope.go:117] "RemoveContainer" containerID="f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932" Nov 25 15:15:13 crc kubenswrapper[4731]: E1125 15:15:13.233697 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932\": container with ID starting with f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932 not found: ID does not exist" containerID="f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.233733 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932"} err="failed to get container status \"f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932\": rpc error: code = NotFound desc = could not find container \"f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932\": container with ID starting with f3ca23f9c6ee73d1619cbe1828ef32e5428ec56700c7e26a3b082db253c73932 not found: ID does not exist" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.233760 4731 scope.go:117] "RemoveContainer" containerID="fa2fe57adf96583667c2e765fdf52ee1d68ea2f56495a0f45f5cbc53a43ca6a1" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.248613 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-vksdl"] Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.275985 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.283423588 podStartE2EDuration="9.275961309s" podCreationTimestamp="2025-11-25 15:15:04 +0000 UTC" firstStartedPulling="2025-11-25 15:15:06.413119146 +0000 UTC m=+1265.160253576" lastFinishedPulling="2025-11-25 15:15:12.405656857 +0000 UTC m=+1271.152791297" observedRunningTime="2025-11-25 15:15:13.250973789 +0000 UTC m=+1271.998108209" watchObservedRunningTime="2025-11-25 15:15:13.275961309 +0000 UTC m=+1272.023095739" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.287308 4731 scope.go:117] "RemoveContainer" 
containerID="1afab7ff5f420277172171948c6fc857eceb6778475b8444826234ef3bb845c3" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.415016 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.417900 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.457250 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:13 crc kubenswrapper[4731]: I1125 15:15:13.457296 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:14 crc kubenswrapper[4731]: I1125 15:15:14.160191 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:14 crc kubenswrapper[4731]: I1125 15:15:14.218169 4731 scope.go:117] "RemoveContainer" containerID="3e686da5c1694ef973dd392dd3f528227a2dceef9fc7eab8736b4d8a38ae8739" Nov 25 15:15:14 crc kubenswrapper[4731]: E1125 15:15:14.218483 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-549496ffb7-8jt8x_openstack(94fb83b0-f12c-45ee-bae8-058c7b59d28f)\"" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" podUID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" Nov 25 15:15:14 crc kubenswrapper[4731]: I1125 15:15:14.221582 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f6e0de1e-5e50-468c-a911-41b1c3e7261a","Type":"ContainerStarted","Data":"63b8bd9063332eb5b544d2b891fa7df25663452346ec576ddd09d586c097f2fa"} Nov 25 15:15:14 crc kubenswrapper[4731]: I1125 15:15:14.221636 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f6e0de1e-5e50-468c-a911-41b1c3e7261a","Type":"ContainerStarted","Data":"1b451004ef6eb998a631e850a72789533405bb9eb543f70466b61d2f9952177e"} Nov 25 15:15:14 crc kubenswrapper[4731]: I1125 15:15:14.225644 4731 scope.go:117] "RemoveContainer" containerID="f503733095a6ce906ac83113d01fec3983d399e5e83b0fa8bbc54362d36b7399" Nov 25 15:15:14 crc kubenswrapper[4731]: E1125 15:15:14.225849 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-69fd4bd965-vmt4d_openstack(a9e3145a-f6c6-47d6-9241-2a919c8a7ac8)\"" pod="openstack/heat-api-69fd4bd965-vmt4d" podUID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" Nov 25 15:15:14 crc kubenswrapper[4731]: I1125 15:15:14.254999 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.254984594 podStartE2EDuration="3.254984594s" podCreationTimestamp="2025-11-25 15:15:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:15:14.250141252 +0000 UTC m=+1272.997275702" watchObservedRunningTime="2025-11-25 15:15:14.254984594 +0000 UTC m=+1273.002119024" Nov 25 15:15:14 crc kubenswrapper[4731]: I1125 15:15:14.823688 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bdd6185-ea4d-4507-b925-5f093fc33931" 
path="/var/lib/kubelet/pods/0bdd6185-ea4d-4507-b925-5f093fc33931/volumes" Nov 25 15:15:15 crc kubenswrapper[4731]: I1125 15:15:15.233141 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="ceilometer-central-agent" containerID="cri-o://0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b" gracePeriod=30 Nov 25 15:15:15 crc kubenswrapper[4731]: I1125 15:15:15.233340 4731 scope.go:117] "RemoveContainer" containerID="3e686da5c1694ef973dd392dd3f528227a2dceef9fc7eab8736b4d8a38ae8739" Nov 25 15:15:15 crc kubenswrapper[4731]: I1125 15:15:15.233420 4731 scope.go:117] "RemoveContainer" containerID="f503733095a6ce906ac83113d01fec3983d399e5e83b0fa8bbc54362d36b7399" Nov 25 15:15:15 crc kubenswrapper[4731]: E1125 15:15:15.233554 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-549496ffb7-8jt8x_openstack(94fb83b0-f12c-45ee-bae8-058c7b59d28f)\"" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" podUID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" Nov 25 15:15:15 crc kubenswrapper[4731]: I1125 15:15:15.233601 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="proxy-httpd" containerID="cri-o://9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59" gracePeriod=30 Nov 25 15:15:15 crc kubenswrapper[4731]: I1125 15:15:15.233674 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="sg-core" containerID="cri-o://32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc" gracePeriod=30 Nov 25 15:15:15 crc kubenswrapper[4731]: I1125 15:15:15.233695 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="ceilometer-notification-agent" containerID="cri-o://ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4" gracePeriod=30 Nov 25 15:15:15 crc kubenswrapper[4731]: E1125 15:15:15.233681 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-69fd4bd965-vmt4d_openstack(a9e3145a-f6c6-47d6-9241-2a919c8a7ac8)\"" pod="openstack/heat-api-69fd4bd965-vmt4d" podUID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.063280 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.155698 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-combined-ca-bundle\") pod \"79f1ac0d-5b42-4d61-b552-9f6675fee139\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.155772 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-sg-core-conf-yaml\") pod \"79f1ac0d-5b42-4d61-b552-9f6675fee139\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.155961 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-run-httpd\") pod \"79f1ac0d-5b42-4d61-b552-9f6675fee139\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.156002 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kczz9\" (UniqueName: \"kubernetes.io/projected/79f1ac0d-5b42-4d61-b552-9f6675fee139-kube-api-access-kczz9\") pod \"79f1ac0d-5b42-4d61-b552-9f6675fee139\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.156048 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-config-data\") pod \"79f1ac0d-5b42-4d61-b552-9f6675fee139\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.156079 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-scripts\") pod \"79f1ac0d-5b42-4d61-b552-9f6675fee139\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.156110 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-log-httpd\") pod \"79f1ac0d-5b42-4d61-b552-9f6675fee139\" (UID: \"79f1ac0d-5b42-4d61-b552-9f6675fee139\") " Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.156770 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "79f1ac0d-5b42-4d61-b552-9f6675fee139" (UID: "79f1ac0d-5b42-4d61-b552-9f6675fee139"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.156959 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "79f1ac0d-5b42-4d61-b552-9f6675fee139" (UID: "79f1ac0d-5b42-4d61-b552-9f6675fee139"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.164157 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79f1ac0d-5b42-4d61-b552-9f6675fee139-kube-api-access-kczz9" (OuterVolumeSpecName: "kube-api-access-kczz9") pod "79f1ac0d-5b42-4d61-b552-9f6675fee139" (UID: "79f1ac0d-5b42-4d61-b552-9f6675fee139"). InnerVolumeSpecName "kube-api-access-kczz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.164885 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-scripts" (OuterVolumeSpecName: "scripts") pod "79f1ac0d-5b42-4d61-b552-9f6675fee139" (UID: "79f1ac0d-5b42-4d61-b552-9f6675fee139"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.196973 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "79f1ac0d-5b42-4d61-b552-9f6675fee139" (UID: "79f1ac0d-5b42-4d61-b552-9f6675fee139"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.258925 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.258955 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.258965 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kczz9\" (UniqueName: \"kubernetes.io/projected/79f1ac0d-5b42-4d61-b552-9f6675fee139-kube-api-access-kczz9\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.258975 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.258983 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/79f1ac0d-5b42-4d61-b552-9f6675fee139-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.261962 4731 generic.go:334] "Generic (PLEG): container finished" podID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerID="9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59" exitCode=0 Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.262014 4731 generic.go:334] "Generic (PLEG): container finished" podID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerID="32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc" exitCode=2 Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.262022 4731 generic.go:334] "Generic (PLEG): container finished" podID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerID="ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4" exitCode=0 Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.262029 4731 generic.go:334] 
"Generic (PLEG): container finished" podID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerID="0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b" exitCode=0 Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.262048 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79f1ac0d-5b42-4d61-b552-9f6675fee139","Type":"ContainerDied","Data":"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59"} Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.262096 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79f1ac0d-5b42-4d61-b552-9f6675fee139","Type":"ContainerDied","Data":"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc"} Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.262108 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79f1ac0d-5b42-4d61-b552-9f6675fee139","Type":"ContainerDied","Data":"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4"} Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.262117 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79f1ac0d-5b42-4d61-b552-9f6675fee139","Type":"ContainerDied","Data":"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b"} Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.262128 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"79f1ac0d-5b42-4d61-b552-9f6675fee139","Type":"ContainerDied","Data":"7e6a2ba0364cab0925ecb238b108cf2914c8cd877948ca9d630be3d0b2cdf1d8"} Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.262142 4731 scope.go:117] "RemoveContainer" containerID="9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.262297 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.299801 4731 scope.go:117] "RemoveContainer" containerID="32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.311049 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79f1ac0d-5b42-4d61-b552-9f6675fee139" (UID: "79f1ac0d-5b42-4d61-b552-9f6675fee139"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.317738 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.317788 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.322199 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-config-data" (OuterVolumeSpecName: "config-data") pod "79f1ac0d-5b42-4d61-b552-9f6675fee139" (UID: "79f1ac0d-5b42-4d61-b552-9f6675fee139"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.350703 4731 scope.go:117] "RemoveContainer" containerID="ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.360343 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.360376 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79f1ac0d-5b42-4d61-b552-9f6675fee139-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.369568 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.374126 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.443203 4731 scope.go:117] "RemoveContainer" containerID="0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.467571 4731 scope.go:117] "RemoveContainer" containerID="9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59" Nov 25 15:15:16 crc kubenswrapper[4731]: E1125 15:15:16.468055 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59\": container with ID starting with 9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59 not found: ID does not exist" containerID="9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.468112 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59"} err="failed to get container status \"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59\": rpc error: code = NotFound desc = could not find container \"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59\": container with ID starting with 9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59 not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.468146 4731 scope.go:117] "RemoveContainer" containerID="32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc" Nov 25 15:15:16 crc kubenswrapper[4731]: E1125 15:15:16.468499 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc\": container with ID starting with 32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc not found: ID does not exist" containerID="32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.468529 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc"} err="failed to get container status \"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc\": rpc error: code = NotFound 
desc = could not find container \"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc\": container with ID starting with 32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.468547 4731 scope.go:117] "RemoveContainer" containerID="ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4" Nov 25 15:15:16 crc kubenswrapper[4731]: E1125 15:15:16.468794 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4\": container with ID starting with ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4 not found: ID does not exist" containerID="ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.468834 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4"} err="failed to get container status \"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4\": rpc error: code = NotFound desc = could not find container \"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4\": container with ID starting with ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4 not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.468847 4731 scope.go:117] "RemoveContainer" containerID="0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b" Nov 25 15:15:16 crc kubenswrapper[4731]: E1125 15:15:16.469041 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b\": container with ID starting with 0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b not found: ID does not exist" containerID="0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.469078 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b"} err="failed to get container status \"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b\": rpc error: code = NotFound desc = could not find container \"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b\": container with ID starting with 0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.469101 4731 scope.go:117] "RemoveContainer" containerID="9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.469340 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59"} err="failed to get container status \"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59\": rpc error: code = NotFound desc = could not find container \"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59\": container with ID starting with 9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59 not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.469364 4731 
scope.go:117] "RemoveContainer" containerID="32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.469552 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc"} err="failed to get container status \"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc\": rpc error: code = NotFound desc = could not find container \"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc\": container with ID starting with 32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.469571 4731 scope.go:117] "RemoveContainer" containerID="ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.469746 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4"} err="failed to get container status \"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4\": rpc error: code = NotFound desc = could not find container \"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4\": container with ID starting with ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4 not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.469763 4731 scope.go:117] "RemoveContainer" containerID="0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.469959 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b"} err="failed to get container status \"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b\": rpc error: code = NotFound desc = could not find container \"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b\": container with ID starting with 0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.469987 4731 scope.go:117] "RemoveContainer" containerID="9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.470188 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59"} err="failed to get container status \"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59\": rpc error: code = NotFound desc = could not find container \"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59\": container with ID starting with 9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59 not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.470210 4731 scope.go:117] "RemoveContainer" containerID="32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.470355 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc"} err="failed to get container status \"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc\": rpc error: code = 
NotFound desc = could not find container \"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc\": container with ID starting with 32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.470373 4731 scope.go:117] "RemoveContainer" containerID="ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.470521 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4"} err="failed to get container status \"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4\": rpc error: code = NotFound desc = could not find container \"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4\": container with ID starting with ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4 not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.470538 4731 scope.go:117] "RemoveContainer" containerID="0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.470710 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b"} err="failed to get container status \"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b\": rpc error: code = NotFound desc = could not find container \"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b\": container with ID starting with 0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.470763 4731 scope.go:117] "RemoveContainer" containerID="9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.471006 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59"} err="failed to get container status \"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59\": rpc error: code = NotFound desc = could not find container \"9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59\": container with ID starting with 9a41e35c64e07ef373ddff5aae440d934764d15a68e2067963f1834202693d59 not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.471022 4731 scope.go:117] "RemoveContainer" containerID="32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.471182 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc"} err="failed to get container status \"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc\": rpc error: code = NotFound desc = could not find container \"32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc\": container with ID starting with 32fdbe23060bf367fda3fa3059494917a78fbae617bbe0eacc95eb161bd674dc not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.471203 4731 scope.go:117] "RemoveContainer" containerID="ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 
15:15:16.471386 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4"} err="failed to get container status \"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4\": rpc error: code = NotFound desc = could not find container \"ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4\": container with ID starting with ebb49b9f2f2b3f38b777439ad03ce2ede0ad0dee3774fbf167576f976e3238d4 not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.471407 4731 scope.go:117] "RemoveContainer" containerID="0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.471601 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b"} err="failed to get container status \"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b\": rpc error: code = NotFound desc = could not find container \"0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b\": container with ID starting with 0644dfae16e1d65ed3318bfe0e62c168e72e05eb0cf2ef3b2652192548d5a96b not found: ID does not exist" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.634787 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.654902 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673095 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6z999"] Nov 25 15:15:16 crc kubenswrapper[4731]: E1125 15:15:16.673493 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="proxy-httpd" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673508 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="proxy-httpd" Nov 25 15:15:16 crc kubenswrapper[4731]: E1125 15:15:16.673523 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="ceilometer-central-agent" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673531 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="ceilometer-central-agent" Nov 25 15:15:16 crc kubenswrapper[4731]: E1125 15:15:16.673549 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="ceilometer-notification-agent" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673555 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="ceilometer-notification-agent" Nov 25 15:15:16 crc kubenswrapper[4731]: E1125 15:15:16.673566 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bdd6185-ea4d-4507-b925-5f093fc33931" containerName="init" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673572 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdd6185-ea4d-4507-b925-5f093fc33931" containerName="init" Nov 25 15:15:16 crc kubenswrapper[4731]: E1125 15:15:16.673584 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0bdd6185-ea4d-4507-b925-5f093fc33931" containerName="dnsmasq-dns" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673590 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bdd6185-ea4d-4507-b925-5f093fc33931" containerName="dnsmasq-dns" Nov 25 15:15:16 crc kubenswrapper[4731]: E1125 15:15:16.673612 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="sg-core" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673618 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="sg-core" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673790 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="sg-core" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673805 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="ceilometer-central-agent" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673830 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bdd6185-ea4d-4507-b925-5f093fc33931" containerName="dnsmasq-dns" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673838 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="proxy-httpd" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.673847 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" containerName="ceilometer-notification-agent" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.674683 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.682359 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.682608 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.682908 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rqk2l" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.687317 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6z999"] Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.703555 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.705762 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.710569 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.710792 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.713631 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.826953 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79f1ac0d-5b42-4d61-b552-9f6675fee139" path="/var/lib/kubelet/pods/79f1ac0d-5b42-4d61-b552-9f6675fee139/volumes" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.875356 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xnh4\" (UniqueName: \"kubernetes.io/projected/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-kube-api-access-6xnh4\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.875436 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.875567 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-scripts\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.875716 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.875791 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4g6n6\" (UniqueName: \"kubernetes.io/projected/78288b66-9540-48dd-ade3-2803572152aa-kube-api-access-4g6n6\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.875867 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-scripts\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.875956 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-log-httpd\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 
15:15:16.876027 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.876082 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-config-data\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.876132 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-config-data\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.876163 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-run-httpd\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.894336 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.976688 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xnh4\" (UniqueName: \"kubernetes.io/projected/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-kube-api-access-6xnh4\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.976790 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.976877 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-scripts\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.976931 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.976983 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4g6n6\" (UniqueName: \"kubernetes.io/projected/78288b66-9540-48dd-ade3-2803572152aa-kube-api-access-4g6n6\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" 
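The "Observed pod startup duration" entry for openstack/ceilometer-0 earlier in this section can be decoded from its own timestamps: podStartSLOduration is the end-to-end startup time minus the image-pull window (lastFinishedPulling - firstStartedPulling), which is consistent with the logged values. A minimal stdlib-Go sketch under that reading, with the four timestamps copied verbatim from the entry; the variable names are illustrative, not kubelet's own:

```go
// Reproduces the podStartSLOduration arithmetic for openstack/ceilometer-0.
// Timestamps are copied verbatim from the log entry above.
package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.Time formatting used in the log entry.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-25 15:15:04 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2025-11-25 15:15:06.413119146 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2025-11-25 15:15:12.405656857 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2025-11-25 15:15:13.275961309 +0000 UTC")   // observedRunningTime

	e2e := running.Sub(created)     // 9.275961309s == podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 5.992537711s spent pulling images
	slo := e2e - pull               // 3.283423598s; the entry logs 3.283423588s
	// (10ns lower) because kubelet subtracts the monotonic m= readings
	// rather than the wall-clock timestamps used here.

	fmt.Println("e2e: ", e2e)
	fmt.Println("pull:", pull)
	fmt.Println("slo: ", slo)
}
```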
Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.977012 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-scripts\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.977035 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-log-httpd\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.977069 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.977102 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-config-data\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.977129 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-run-httpd\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.977148 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-config-data\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.977491 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-log-httpd\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.978026 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-run-httpd\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.981299 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-config-data\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.981492 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.981526 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-scripts\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.981808 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-scripts\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.985243 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.985776 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-config-data\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.993629 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.997044 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xnh4\" (UniqueName: \"kubernetes.io/projected/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-kube-api-access-6xnh4\") pod \"nova-cell0-conductor-db-sync-6z999\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:16 crc kubenswrapper[4731]: I1125 15:15:16.998531 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4g6n6\" (UniqueName: \"kubernetes.io/projected/78288b66-9540-48dd-ade3-2803572152aa-kube-api-access-4g6n6\") pod \"ceilometer-0\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " pod="openstack/ceilometer-0" Nov 25 15:15:17 crc kubenswrapper[4731]: I1125 15:15:17.018097 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:15:17 crc kubenswrapper[4731]: I1125 15:15:17.033703 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:17 crc kubenswrapper[4731]: I1125 15:15:17.338278 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:17 crc kubenswrapper[4731]: I1125 15:15:17.338641 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:17 crc kubenswrapper[4731]: I1125 15:15:17.607545 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6z999"] Nov 25 15:15:17 crc kubenswrapper[4731]: I1125 15:15:17.618137 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:17 crc kubenswrapper[4731]: W1125 15:15:17.621863 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78288b66_9540_48dd_ade3_2803572152aa.slice/crio-d6f5376bd8c594641686b78d76c879b773f4ea3e7120ae2235981d929ee6c6dd WatchSource:0}: Error finding container d6f5376bd8c594641686b78d76c879b773f4ea3e7120ae2235981d929ee6c6dd: Status 404 returned error can't find the container with id d6f5376bd8c594641686b78d76c879b773f4ea3e7120ae2235981d929ee6c6dd Nov 25 15:15:18 crc kubenswrapper[4731]: I1125 15:15:18.166632 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-cdf8597bf-t8vdc" Nov 25 15:15:18 crc kubenswrapper[4731]: I1125 15:15:18.249972 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-69fd4bd965-vmt4d"] Nov 25 15:15:18 crc kubenswrapper[4731]: I1125 15:15:18.282905 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-8cf475f5-wpg2b" Nov 25 15:15:18 crc kubenswrapper[4731]: I1125 15:15:18.396392 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-549496ffb7-8jt8x"] Nov 25 15:15:18 crc kubenswrapper[4731]: I1125 15:15:18.409336 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6z999" event={"ID":"ceeba86d-0fd6-4a88-85ec-1ed6e079d413","Type":"ContainerStarted","Data":"e46b24e68051e30d5500aca8c126e80f137ff1e827a3ec33c32cf4874c2dd37d"} Nov 25 15:15:18 crc kubenswrapper[4731]: I1125 15:15:18.436000 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78288b66-9540-48dd-ade3-2803572152aa","Type":"ContainerStarted","Data":"d6f5376bd8c594641686b78d76c879b773f4ea3e7120ae2235981d929ee6c6dd"} Nov 25 15:15:18 crc kubenswrapper[4731]: I1125 15:15:18.928563 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:18 crc kubenswrapper[4731]: I1125 15:15:18.994500 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.023520 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data\") pod \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.023568 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data-custom\") pod \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.023596 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-combined-ca-bundle\") pod \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.023683 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v64fc\" (UniqueName: \"kubernetes.io/projected/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-kube-api-access-v64fc\") pod \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\" (UID: \"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8\") " Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.028423 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" (UID: "a9e3145a-f6c6-47d6-9241-2a919c8a7ac8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.029119 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-kube-api-access-v64fc" (OuterVolumeSpecName: "kube-api-access-v64fc") pod "a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" (UID: "a9e3145a-f6c6-47d6-9241-2a919c8a7ac8"). InnerVolumeSpecName "kube-api-access-v64fc". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.039497 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.080406 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" (UID: "a9e3145a-f6c6-47d6-9241-2a919c8a7ac8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.105173 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data" (OuterVolumeSpecName: "config-data") pod "a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" (UID: "a9e3145a-f6c6-47d6-9241-2a919c8a7ac8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.125846 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms6n2\" (UniqueName: \"kubernetes.io/projected/94fb83b0-f12c-45ee-bae8-058c7b59d28f-kube-api-access-ms6n2\") pod \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.125886 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-combined-ca-bundle\") pod \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.125952 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data-custom\") pod \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.126063 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data\") pod \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\" (UID: \"94fb83b0-f12c-45ee-bae8-058c7b59d28f\") " Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.126459 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.126475 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.126486 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.126495 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v64fc\" (UniqueName: \"kubernetes.io/projected/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8-kube-api-access-v64fc\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.134050 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94fb83b0-f12c-45ee-bae8-058c7b59d28f-kube-api-access-ms6n2" (OuterVolumeSpecName: "kube-api-access-ms6n2") pod "94fb83b0-f12c-45ee-bae8-058c7b59d28f" (UID: "94fb83b0-f12c-45ee-bae8-058c7b59d28f"). InnerVolumeSpecName "kube-api-access-ms6n2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.145107 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "94fb83b0-f12c-45ee-bae8-058c7b59d28f" (UID: "94fb83b0-f12c-45ee-bae8-058c7b59d28f"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.187664 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data" (OuterVolumeSpecName: "config-data") pod "94fb83b0-f12c-45ee-bae8-058c7b59d28f" (UID: "94fb83b0-f12c-45ee-bae8-058c7b59d28f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.201444 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94fb83b0-f12c-45ee-bae8-058c7b59d28f" (UID: "94fb83b0-f12c-45ee-bae8-058c7b59d28f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.231201 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.231245 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.231258 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms6n2\" (UniqueName: \"kubernetes.io/projected/94fb83b0-f12c-45ee-bae8-058c7b59d28f-kube-api-access-ms6n2\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.231271 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/94fb83b0-f12c-45ee-bae8-058c7b59d28f-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.464946 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" event={"ID":"94fb83b0-f12c-45ee-bae8-058c7b59d28f","Type":"ContainerDied","Data":"5964d8a0ce2cd130598866ace0688d3f4462ec9acd3b266109a08bda1c5e4581"} Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.465003 4731 scope.go:117] "RemoveContainer" containerID="3e686da5c1694ef973dd392dd3f528227a2dceef9fc7eab8736b4d8a38ae8739" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.465144 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-549496ffb7-8jt8x" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.474770 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.474802 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.475874 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-69fd4bd965-vmt4d" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.478479 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-69fd4bd965-vmt4d" event={"ID":"a9e3145a-f6c6-47d6-9241-2a919c8a7ac8","Type":"ContainerDied","Data":"d2c66e2d1b611c46a7ee53cce6a5561a9f3ebaa17c6f5db7c91633c6edab86b8"} Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.516128 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-549496ffb7-8jt8x"] Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.524944 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-549496ffb7-8jt8x"] Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.535858 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-69fd4bd965-vmt4d"] Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.544246 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.549043 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-69fd4bd965-vmt4d"] Nov 25 15:15:19 crc kubenswrapper[4731]: I1125 15:15:19.976265 4731 scope.go:117] "RemoveContainer" containerID="f503733095a6ce906ac83113d01fec3983d399e5e83b0fa8bbc54362d36b7399" Nov 25 15:15:20 crc kubenswrapper[4731]: I1125 15:15:20.440524 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:20 crc kubenswrapper[4731]: I1125 15:15:20.441739 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Nov 25 15:15:20 crc kubenswrapper[4731]: I1125 15:15:20.485566 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78288b66-9540-48dd-ade3-2803572152aa","Type":"ContainerStarted","Data":"eeaadc579214b29c11e2558e57d40119c8b69ab2e4fea672e7f22e19ae55e684"} Nov 25 15:15:20 crc kubenswrapper[4731]: I1125 15:15:20.823750 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" path="/var/lib/kubelet/pods/94fb83b0-f12c-45ee-bae8-058c7b59d28f/volumes" Nov 25 15:15:20 crc kubenswrapper[4731]: I1125 15:15:20.825289 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" path="/var/lib/kubelet/pods/a9e3145a-f6c6-47d6-9241-2a919c8a7ac8/volumes" Nov 25 15:15:21 crc kubenswrapper[4731]: I1125 15:15:21.505889 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78288b66-9540-48dd-ade3-2803572152aa","Type":"ContainerStarted","Data":"828a8305fd86b68e9ff1b98db3f2c1276cf5946587e14a526d7f431cb0c4851e"} Nov 25 15:15:21 crc kubenswrapper[4731]: I1125 15:15:21.773156 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 15:15:21 crc kubenswrapper[4731]: I1125 15:15:21.773367 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Nov 25 15:15:21 crc kubenswrapper[4731]: I1125 15:15:21.814495 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Nov 25 15:15:21 crc kubenswrapper[4731]: I1125 15:15:21.819396 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Nov 25 15:15:22 crc kubenswrapper[4731]: I1125 15:15:22.524919 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78288b66-9540-48dd-ade3-2803572152aa","Type":"ContainerStarted","Data":"b9f6295c7039316f3ccd467038e06fb7ebf1b2759c764118e92ebb0fdf3fa919"} Nov 25 15:15:22 crc kubenswrapper[4731]: I1125 15:15:22.525274 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 15:15:22 crc kubenswrapper[4731]: I1125 15:15:22.525288 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Nov 25 15:15:23 crc kubenswrapper[4731]: I1125 15:15:23.146030 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:23 crc kubenswrapper[4731]: I1125 15:15:23.472172 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-574677c86-dzlj9" Nov 25 15:15:23 crc kubenswrapper[4731]: I1125 15:15:23.544431 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5f66fcd449-6b7kw"] Nov 25 15:15:23 crc kubenswrapper[4731]: I1125 15:15:23.544960 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5f66fcd449-6b7kw" podUID="069c92d4-3268-4960-8aaf-e071a962a219" containerName="heat-engine" containerID="cri-o://f9eaa4336f93531e33070a847739530af2f2f0bb6ff673bc0e7843ca301bb281" gracePeriod=60 Nov 25 15:15:24 crc kubenswrapper[4731]: I1125 15:15:24.809707 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 15:15:24 crc kubenswrapper[4731]: I1125 15:15:24.810139 4731 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 25 15:15:24 crc kubenswrapper[4731]: I1125 15:15:24.823916 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Nov 25 15:15:26 crc kubenswrapper[4731]: E1125 15:15:26.867791 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9eaa4336f93531e33070a847739530af2f2f0bb6ff673bc0e7843ca301bb281" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 15:15:26 crc kubenswrapper[4731]: E1125 15:15:26.871052 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9eaa4336f93531e33070a847739530af2f2f0bb6ff673bc0e7843ca301bb281" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 15:15:26 crc kubenswrapper[4731]: E1125 15:15:26.877934 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f9eaa4336f93531e33070a847739530af2f2f0bb6ff673bc0e7843ca301bb281" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Nov 25 15:15:26 crc kubenswrapper[4731]: E1125 15:15:26.878008 4731 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5f66fcd449-6b7kw" 
podUID="069c92d4-3268-4960-8aaf-e071a962a219" containerName="heat-engine" Nov 25 15:15:32 crc kubenswrapper[4731]: E1125 15:15:32.879018 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Nov 25 15:15:32 crc kubenswrapper[4731]: E1125 15:15:32.879702 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xnh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-6z999_openstack(ceeba86d-0fd6-4a88-85ec-1ed6e079d413): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Nov 25 15:15:32 crc kubenswrapper[4731]: E1125 15:15:32.880903 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-6z999" podUID="ceeba86d-0fd6-4a88-85ec-1ed6e079d413" Nov 25 15:15:33 crc kubenswrapper[4731]: I1125 15:15:33.661110 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="ceilometer-central-agent" containerID="cri-o://eeaadc579214b29c11e2558e57d40119c8b69ab2e4fea672e7f22e19ae55e684" gracePeriod=30 Nov 25 15:15:33 crc kubenswrapper[4731]: I1125 15:15:33.661725 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"78288b66-9540-48dd-ade3-2803572152aa","Type":"ContainerStarted","Data":"e577c683896ade21424b1ce765b15d8267020a883c15b34b7945df10ce935d25"} Nov 25 15:15:33 crc kubenswrapper[4731]: I1125 15:15:33.661943 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:15:33 crc kubenswrapper[4731]: I1125 15:15:33.661964 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="sg-core" containerID="cri-o://b9f6295c7039316f3ccd467038e06fb7ebf1b2759c764118e92ebb0fdf3fa919" gracePeriod=30 Nov 25 15:15:33 crc kubenswrapper[4731]: I1125 15:15:33.661990 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="ceilometer-notification-agent" containerID="cri-o://828a8305fd86b68e9ff1b98db3f2c1276cf5946587e14a526d7f431cb0c4851e" gracePeriod=30 Nov 25 15:15:33 crc kubenswrapper[4731]: I1125 15:15:33.662064 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="proxy-httpd" containerID="cri-o://e577c683896ade21424b1ce765b15d8267020a883c15b34b7945df10ce935d25" gracePeriod=30 Nov 25 15:15:33 crc kubenswrapper[4731]: E1125 15:15:33.664273 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-6z999" podUID="ceeba86d-0fd6-4a88-85ec-1ed6e079d413" Nov 25 15:15:33 crc kubenswrapper[4731]: I1125 15:15:33.702686 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.415305641 podStartE2EDuration="17.702665896s" podCreationTimestamp="2025-11-25 15:15:16 +0000 UTC" firstStartedPulling="2025-11-25 15:15:17.630432373 +0000 UTC m=+1276.377566803" lastFinishedPulling="2025-11-25 15:15:32.917792628 +0000 UTC m=+1291.664927058" observedRunningTime="2025-11-25 15:15:33.702358048 +0000 UTC m=+1292.449492478" watchObservedRunningTime="2025-11-25 15:15:33.702665896 +0000 UTC m=+1292.449800336" Nov 25 15:15:33 crc kubenswrapper[4731]: I1125 15:15:33.990277 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="5d30f6bd-f2f7-4edb-a26a-a63d92e2414c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.157:3000/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.670895 4731 generic.go:334] "Generic (PLEG): container finished" podID="78288b66-9540-48dd-ade3-2803572152aa" containerID="e577c683896ade21424b1ce765b15d8267020a883c15b34b7945df10ce935d25" exitCode=0 Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.671225 4731 generic.go:334] "Generic (PLEG): container finished" podID="78288b66-9540-48dd-ade3-2803572152aa" containerID="b9f6295c7039316f3ccd467038e06fb7ebf1b2759c764118e92ebb0fdf3fa919" exitCode=2 Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.670977 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"78288b66-9540-48dd-ade3-2803572152aa","Type":"ContainerDied","Data":"e577c683896ade21424b1ce765b15d8267020a883c15b34b7945df10ce935d25"} Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.671297 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78288b66-9540-48dd-ade3-2803572152aa","Type":"ContainerDied","Data":"b9f6295c7039316f3ccd467038e06fb7ebf1b2759c764118e92ebb0fdf3fa919"} Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.672941 4731 generic.go:334] "Generic (PLEG): container finished" podID="069c92d4-3268-4960-8aaf-e071a962a219" containerID="f9eaa4336f93531e33070a847739530af2f2f0bb6ff673bc0e7843ca301bb281" exitCode=0 Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.672985 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f66fcd449-6b7kw" event={"ID":"069c92d4-3268-4960-8aaf-e071a962a219","Type":"ContainerDied","Data":"f9eaa4336f93531e33070a847739530af2f2f0bb6ff673bc0e7843ca301bb281"} Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.804897 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.949903 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-combined-ca-bundle\") pod \"069c92d4-3268-4960-8aaf-e071a962a219\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.949957 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data\") pod \"069c92d4-3268-4960-8aaf-e071a962a219\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.950775 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data-custom\") pod \"069c92d4-3268-4960-8aaf-e071a962a219\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.950851 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwdsj\" (UniqueName: \"kubernetes.io/projected/069c92d4-3268-4960-8aaf-e071a962a219-kube-api-access-kwdsj\") pod \"069c92d4-3268-4960-8aaf-e071a962a219\" (UID: \"069c92d4-3268-4960-8aaf-e071a962a219\") " Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.955357 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "069c92d4-3268-4960-8aaf-e071a962a219" (UID: "069c92d4-3268-4960-8aaf-e071a962a219"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.971657 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/069c92d4-3268-4960-8aaf-e071a962a219-kube-api-access-kwdsj" (OuterVolumeSpecName: "kube-api-access-kwdsj") pod "069c92d4-3268-4960-8aaf-e071a962a219" (UID: "069c92d4-3268-4960-8aaf-e071a962a219"). InnerVolumeSpecName "kube-api-access-kwdsj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:34 crc kubenswrapper[4731]: I1125 15:15:34.991054 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "069c92d4-3268-4960-8aaf-e071a962a219" (UID: "069c92d4-3268-4960-8aaf-e071a962a219"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.014604 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data" (OuterVolumeSpecName: "config-data") pod "069c92d4-3268-4960-8aaf-e071a962a219" (UID: "069c92d4-3268-4960-8aaf-e071a962a219"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.053372 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.053417 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.053434 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/069c92d4-3268-4960-8aaf-e071a962a219-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.053450 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwdsj\" (UniqueName: \"kubernetes.io/projected/069c92d4-3268-4960-8aaf-e071a962a219-kube-api-access-kwdsj\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.684138 4731 generic.go:334] "Generic (PLEG): container finished" podID="78288b66-9540-48dd-ade3-2803572152aa" containerID="828a8305fd86b68e9ff1b98db3f2c1276cf5946587e14a526d7f431cb0c4851e" exitCode=0 Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.684211 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78288b66-9540-48dd-ade3-2803572152aa","Type":"ContainerDied","Data":"828a8305fd86b68e9ff1b98db3f2c1276cf5946587e14a526d7f431cb0c4851e"} Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.686621 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5f66fcd449-6b7kw" event={"ID":"069c92d4-3268-4960-8aaf-e071a962a219","Type":"ContainerDied","Data":"ddeabf12920c6acd18450d9af92cefd455d227019f5b2a0d4650a4651794a77c"} Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.686681 4731 scope.go:117] "RemoveContainer" containerID="f9eaa4336f93531e33070a847739530af2f2f0bb6ff673bc0e7843ca301bb281" Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.686806 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5f66fcd449-6b7kw" Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.716083 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5f66fcd449-6b7kw"] Nov 25 15:15:35 crc kubenswrapper[4731]: I1125 15:15:35.726118 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5f66fcd449-6b7kw"] Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.699692 4731 generic.go:334] "Generic (PLEG): container finished" podID="78288b66-9540-48dd-ade3-2803572152aa" containerID="eeaadc579214b29c11e2558e57d40119c8b69ab2e4fea672e7f22e19ae55e684" exitCode=0 Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.699766 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78288b66-9540-48dd-ade3-2803572152aa","Type":"ContainerDied","Data":"eeaadc579214b29c11e2558e57d40119c8b69ab2e4fea672e7f22e19ae55e684"} Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.700256 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"78288b66-9540-48dd-ade3-2803572152aa","Type":"ContainerDied","Data":"d6f5376bd8c594641686b78d76c879b773f4ea3e7120ae2235981d929ee6c6dd"} Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.700302 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6f5376bd8c594641686b78d76c879b773f4ea3e7120ae2235981d929ee6c6dd" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.766362 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.852384 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="069c92d4-3268-4960-8aaf-e071a962a219" path="/var/lib/kubelet/pods/069c92d4-3268-4960-8aaf-e071a962a219/volumes" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.889492 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-log-httpd\") pod \"78288b66-9540-48dd-ade3-2803572152aa\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.889608 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-scripts\") pod \"78288b66-9540-48dd-ade3-2803572152aa\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.889643 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-run-httpd\") pod \"78288b66-9540-48dd-ade3-2803572152aa\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.889667 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-sg-core-conf-yaml\") pod \"78288b66-9540-48dd-ade3-2803572152aa\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.889689 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-config-data\") pod 
\"78288b66-9540-48dd-ade3-2803572152aa\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.890225 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "78288b66-9540-48dd-ade3-2803572152aa" (UID: "78288b66-9540-48dd-ade3-2803572152aa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.890376 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "78288b66-9540-48dd-ade3-2803572152aa" (UID: "78288b66-9540-48dd-ade3-2803572152aa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.890484 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-combined-ca-bundle\") pod \"78288b66-9540-48dd-ade3-2803572152aa\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.890524 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g6n6\" (UniqueName: \"kubernetes.io/projected/78288b66-9540-48dd-ade3-2803572152aa-kube-api-access-4g6n6\") pod \"78288b66-9540-48dd-ade3-2803572152aa\" (UID: \"78288b66-9540-48dd-ade3-2803572152aa\") " Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.891070 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.891094 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/78288b66-9540-48dd-ade3-2803572152aa-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.895259 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78288b66-9540-48dd-ade3-2803572152aa-kube-api-access-4g6n6" (OuterVolumeSpecName: "kube-api-access-4g6n6") pod "78288b66-9540-48dd-ade3-2803572152aa" (UID: "78288b66-9540-48dd-ade3-2803572152aa"). InnerVolumeSpecName "kube-api-access-4g6n6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.895350 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-scripts" (OuterVolumeSpecName: "scripts") pod "78288b66-9540-48dd-ade3-2803572152aa" (UID: "78288b66-9540-48dd-ade3-2803572152aa"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.926068 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "78288b66-9540-48dd-ade3-2803572152aa" (UID: "78288b66-9540-48dd-ade3-2803572152aa"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.963744 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "78288b66-9540-48dd-ade3-2803572152aa" (UID: "78288b66-9540-48dd-ade3-2803572152aa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.994467 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.994509 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4g6n6\" (UniqueName: \"kubernetes.io/projected/78288b66-9540-48dd-ade3-2803572152aa-kube-api-access-4g6n6\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.994519 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:36 crc kubenswrapper[4731]: I1125 15:15:36.994527 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.012892 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-config-data" (OuterVolumeSpecName: "config-data") pod "78288b66-9540-48dd-ade3-2803572152aa" (UID: "78288b66-9540-48dd-ade3-2803572152aa"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.096614 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78288b66-9540-48dd-ade3-2803572152aa-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.711841 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.745143 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.762060 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.803772 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:37 crc kubenswrapper[4731]: E1125 15:15:37.804243 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" containerName="heat-api" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804263 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" containerName="heat-api" Nov 25 15:15:37 crc kubenswrapper[4731]: E1125 15:15:37.804273 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" containerName="heat-api" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804280 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" containerName="heat-api" Nov 25 15:15:37 crc kubenswrapper[4731]: E1125 15:15:37.804292 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="ceilometer-notification-agent" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804298 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="ceilometer-notification-agent" Nov 25 15:15:37 crc kubenswrapper[4731]: E1125 15:15:37.804309 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" containerName="heat-cfnapi" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804315 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" containerName="heat-cfnapi" Nov 25 15:15:37 crc kubenswrapper[4731]: E1125 15:15:37.804323 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="069c92d4-3268-4960-8aaf-e071a962a219" containerName="heat-engine" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804328 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="069c92d4-3268-4960-8aaf-e071a962a219" containerName="heat-engine" Nov 25 15:15:37 crc kubenswrapper[4731]: E1125 15:15:37.804343 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="proxy-httpd" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804349 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="proxy-httpd" Nov 25 15:15:37 crc kubenswrapper[4731]: E1125 15:15:37.804370 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="sg-core" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804377 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="sg-core" Nov 25 15:15:37 crc kubenswrapper[4731]: E1125 15:15:37.804388 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" containerName="heat-cfnapi" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804394 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" containerName="heat-cfnapi" Nov 25 15:15:37 crc kubenswrapper[4731]: E1125 15:15:37.804412 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="ceilometer-central-agent" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804418 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="ceilometer-central-agent" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804594 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="sg-core" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804606 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="ceilometer-central-agent" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804619 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="proxy-httpd" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804626 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" containerName="heat-api" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804640 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" containerName="heat-cfnapi" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804650 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="78288b66-9540-48dd-ade3-2803572152aa" containerName="ceilometer-notification-agent" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804661 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="94fb83b0-f12c-45ee-bae8-058c7b59d28f" containerName="heat-cfnapi" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.804670 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="069c92d4-3268-4960-8aaf-e071a962a219" containerName="heat-engine" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.805026 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9e3145a-f6c6-47d6-9241-2a919c8a7ac8" containerName="heat-api" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.806390 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.809358 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.809562 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.810206 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.911945 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-config-data\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.912008 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-run-httpd\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.912043 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-log-httpd\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.912239 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-scripts\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.912329 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.912501 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnrg2\" (UniqueName: \"kubernetes.io/projected/7304c631-78cb-4360-97f2-8349fa34b009-kube-api-access-qnrg2\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:37 crc kubenswrapper[4731]: I1125 15:15:37.912554 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.014260 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-config-data\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.014317 
4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-run-httpd\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.014350 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-log-httpd\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.014393 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-scripts\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.014423 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.014486 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnrg2\" (UniqueName: \"kubernetes.io/projected/7304c631-78cb-4360-97f2-8349fa34b009-kube-api-access-qnrg2\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.014516 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.015718 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-log-httpd\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.016275 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-run-httpd\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.020065 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.020736 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.022995 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-config-data\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.032453 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-scripts\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.035960 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnrg2\" (UniqueName: \"kubernetes.io/projected/7304c631-78cb-4360-97f2-8349fa34b009-kube-api-access-qnrg2\") pod \"ceilometer-0\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.134344 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.743391 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:38 crc kubenswrapper[4731]: I1125 15:15:38.823124 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78288b66-9540-48dd-ade3-2803572152aa" path="/var/lib/kubelet/pods/78288b66-9540-48dd-ade3-2803572152aa/volumes" Nov 25 15:15:39 crc kubenswrapper[4731]: I1125 15:15:39.731382 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7304c631-78cb-4360-97f2-8349fa34b009","Type":"ContainerStarted","Data":"05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff"} Nov 25 15:15:39 crc kubenswrapper[4731]: I1125 15:15:39.731690 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7304c631-78cb-4360-97f2-8349fa34b009","Type":"ContainerStarted","Data":"bc90fceed26a0c68ce4b7cef0567b21f76055b612d1e81a99dfba1b4e90f67e3"} Nov 25 15:15:40 crc kubenswrapper[4731]: I1125 15:15:40.741844 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7304c631-78cb-4360-97f2-8349fa34b009","Type":"ContainerStarted","Data":"bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824"} Nov 25 15:15:41 crc kubenswrapper[4731]: I1125 15:15:41.754203 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7304c631-78cb-4360-97f2-8349fa34b009","Type":"ContainerStarted","Data":"5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361"} Nov 25 15:15:41 crc kubenswrapper[4731]: I1125 15:15:41.975871 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:44 crc kubenswrapper[4731]: I1125 15:15:44.781779 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7304c631-78cb-4360-97f2-8349fa34b009","Type":"ContainerStarted","Data":"df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779"} Nov 25 15:15:44 crc kubenswrapper[4731]: I1125 15:15:44.782405 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:15:44 crc kubenswrapper[4731]: I1125 15:15:44.782091 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="sg-core" 
containerID="cri-o://5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361" gracePeriod=30 Nov 25 15:15:44 crc kubenswrapper[4731]: I1125 15:15:44.781948 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="ceilometer-central-agent" containerID="cri-o://05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff" gracePeriod=30 Nov 25 15:15:44 crc kubenswrapper[4731]: I1125 15:15:44.782111 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="proxy-httpd" containerID="cri-o://df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779" gracePeriod=30 Nov 25 15:15:44 crc kubenswrapper[4731]: I1125 15:15:44.782123 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="ceilometer-notification-agent" containerID="cri-o://bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824" gracePeriod=30 Nov 25 15:15:44 crc kubenswrapper[4731]: I1125 15:15:44.840571 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.937307064 podStartE2EDuration="7.840544749s" podCreationTimestamp="2025-11-25 15:15:37 +0000 UTC" firstStartedPulling="2025-11-25 15:15:38.749448622 +0000 UTC m=+1297.496583052" lastFinishedPulling="2025-11-25 15:15:43.652686307 +0000 UTC m=+1302.399820737" observedRunningTime="2025-11-25 15:15:44.83266672 +0000 UTC m=+1303.579801150" watchObservedRunningTime="2025-11-25 15:15:44.840544749 +0000 UTC m=+1303.587679179" Nov 25 15:15:45 crc kubenswrapper[4731]: I1125 15:15:45.794497 4731 generic.go:334] "Generic (PLEG): container finished" podID="7304c631-78cb-4360-97f2-8349fa34b009" containerID="df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779" exitCode=0 Nov 25 15:15:45 crc kubenswrapper[4731]: I1125 15:15:45.794902 4731 generic.go:334] "Generic (PLEG): container finished" podID="7304c631-78cb-4360-97f2-8349fa34b009" containerID="5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361" exitCode=2 Nov 25 15:15:45 crc kubenswrapper[4731]: I1125 15:15:45.794917 4731 generic.go:334] "Generic (PLEG): container finished" podID="7304c631-78cb-4360-97f2-8349fa34b009" containerID="bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824" exitCode=0 Nov 25 15:15:45 crc kubenswrapper[4731]: I1125 15:15:45.794713 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7304c631-78cb-4360-97f2-8349fa34b009","Type":"ContainerDied","Data":"df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779"} Nov 25 15:15:45 crc kubenswrapper[4731]: I1125 15:15:45.794956 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7304c631-78cb-4360-97f2-8349fa34b009","Type":"ContainerDied","Data":"5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361"} Nov 25 15:15:45 crc kubenswrapper[4731]: I1125 15:15:45.794971 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7304c631-78cb-4360-97f2-8349fa34b009","Type":"ContainerDied","Data":"bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824"} Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.546248 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.709678 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnrg2\" (UniqueName: \"kubernetes.io/projected/7304c631-78cb-4360-97f2-8349fa34b009-kube-api-access-qnrg2\") pod \"7304c631-78cb-4360-97f2-8349fa34b009\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.709787 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-log-httpd\") pod \"7304c631-78cb-4360-97f2-8349fa34b009\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.709908 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-config-data\") pod \"7304c631-78cb-4360-97f2-8349fa34b009\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.709945 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-run-httpd\") pod \"7304c631-78cb-4360-97f2-8349fa34b009\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.709960 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-sg-core-conf-yaml\") pod \"7304c631-78cb-4360-97f2-8349fa34b009\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.710046 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-combined-ca-bundle\") pod \"7304c631-78cb-4360-97f2-8349fa34b009\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.710070 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-scripts\") pod \"7304c631-78cb-4360-97f2-8349fa34b009\" (UID: \"7304c631-78cb-4360-97f2-8349fa34b009\") " Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.710785 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7304c631-78cb-4360-97f2-8349fa34b009" (UID: "7304c631-78cb-4360-97f2-8349fa34b009"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.711232 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7304c631-78cb-4360-97f2-8349fa34b009" (UID: "7304c631-78cb-4360-97f2-8349fa34b009"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.715789 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7304c631-78cb-4360-97f2-8349fa34b009-kube-api-access-qnrg2" (OuterVolumeSpecName: "kube-api-access-qnrg2") pod "7304c631-78cb-4360-97f2-8349fa34b009" (UID: "7304c631-78cb-4360-97f2-8349fa34b009"). InnerVolumeSpecName "kube-api-access-qnrg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.728290 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-scripts" (OuterVolumeSpecName: "scripts") pod "7304c631-78cb-4360-97f2-8349fa34b009" (UID: "7304c631-78cb-4360-97f2-8349fa34b009"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.751493 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7304c631-78cb-4360-97f2-8349fa34b009" (UID: "7304c631-78cb-4360-97f2-8349fa34b009"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.800667 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7304c631-78cb-4360-97f2-8349fa34b009" (UID: "7304c631-78cb-4360-97f2-8349fa34b009"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.811755 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.811790 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.811804 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnrg2\" (UniqueName: \"kubernetes.io/projected/7304c631-78cb-4360-97f2-8349fa34b009-kube-api-access-qnrg2\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.811842 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.811854 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7304c631-78cb-4360-97f2-8349fa34b009-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.811867 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.814401 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="7304c631-78cb-4360-97f2-8349fa34b009" containerID="05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff" exitCode=0 Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.814442 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7304c631-78cb-4360-97f2-8349fa34b009","Type":"ContainerDied","Data":"05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff"} Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.814467 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7304c631-78cb-4360-97f2-8349fa34b009","Type":"ContainerDied","Data":"bc90fceed26a0c68ce4b7cef0567b21f76055b612d1e81a99dfba1b4e90f67e3"} Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.814484 4731 scope.go:117] "RemoveContainer" containerID="df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.814633 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.835610 4731 scope.go:117] "RemoveContainer" containerID="5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.854983 4731 scope.go:117] "RemoveContainer" containerID="bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.857334 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-config-data" (OuterVolumeSpecName: "config-data") pod "7304c631-78cb-4360-97f2-8349fa34b009" (UID: "7304c631-78cb-4360-97f2-8349fa34b009"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.874403 4731 scope.go:117] "RemoveContainer" containerID="05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.895490 4731 scope.go:117] "RemoveContainer" containerID="df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779" Nov 25 15:15:47 crc kubenswrapper[4731]: E1125 15:15:47.896002 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779\": container with ID starting with df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779 not found: ID does not exist" containerID="df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.896030 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779"} err="failed to get container status \"df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779\": rpc error: code = NotFound desc = could not find container \"df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779\": container with ID starting with df7c287ff7357a9684f98a9a025550c236ce2b92e349b9e3c7bfbcbebd144779 not found: ID does not exist" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.896054 4731 scope.go:117] "RemoveContainer" containerID="5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361" Nov 25 15:15:47 crc kubenswrapper[4731]: E1125 15:15:47.896500 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361\": container with ID starting with 5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361 not found: ID does not exist" containerID="5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.896596 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361"} err="failed to get container status \"5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361\": rpc error: code = NotFound desc = could not find container \"5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361\": container with ID starting with 5a5799456176bf02acdbcb7b16193653a47bb25e29ed5d522378c807120e1361 not found: ID does not exist" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.896681 4731 scope.go:117] "RemoveContainer" containerID="bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824" Nov 25 15:15:47 crc kubenswrapper[4731]: E1125 15:15:47.897129 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824\": container with ID starting with bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824 not found: ID does not exist" containerID="bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.897174 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824"} err="failed to get container status \"bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824\": rpc error: code = NotFound desc = could not find container \"bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824\": container with ID starting with bc38083c1aaca059e2c6fc1fed6384d549f8f72895bc32f7a41d3fd8171f5824 not found: ID does not exist" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.897205 4731 scope.go:117] "RemoveContainer" containerID="05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff" Nov 25 15:15:47 crc kubenswrapper[4731]: E1125 15:15:47.897469 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff\": container with ID starting with 05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff not found: ID does not exist" containerID="05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.897512 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff"} err="failed to get container status \"05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff\": rpc error: code = NotFound desc = could not find container \"05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff\": container with ID starting with 05cd7eda3ff5e51160f6a02db5c43d196b067eb89d2cd5e90e26981487528dff not found: ID does not exist" Nov 25 15:15:47 crc kubenswrapper[4731]: I1125 15:15:47.913745 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7304c631-78cb-4360-97f2-8349fa34b009-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.198844 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.212474 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.242876 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:48 crc kubenswrapper[4731]: E1125 15:15:48.243491 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="proxy-httpd" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.243619 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="proxy-httpd" Nov 25 15:15:48 crc kubenswrapper[4731]: E1125 15:15:48.243696 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="sg-core" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.243792 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="sg-core" Nov 25 15:15:48 crc kubenswrapper[4731]: E1125 15:15:48.243893 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="ceilometer-central-agent" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.243948 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7304c631-78cb-4360-97f2-8349fa34b009" 
containerName="ceilometer-central-agent" Nov 25 15:15:48 crc kubenswrapper[4731]: E1125 15:15:48.244012 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="ceilometer-notification-agent" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.244089 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="ceilometer-notification-agent" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.244448 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="ceilometer-notification-agent" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.244542 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="sg-core" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.244619 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="ceilometer-central-agent" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.244693 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7304c631-78cb-4360-97f2-8349fa34b009" containerName="proxy-httpd" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.246491 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.252489 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.252765 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.258284 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.330984 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-config-data\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.331440 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g5mj\" (UniqueName: \"kubernetes.io/projected/9be31bcf-2c0d-4efd-a5f4-e16028e12753-kube-api-access-5g5mj\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.331517 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.331576 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-scripts\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.331621 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-run-httpd\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.331663 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.331736 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-log-httpd\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.432930 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.432981 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-scripts\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.433013 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-run-httpd\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.433043 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.433085 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-log-httpd\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.433137 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-config-data\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.433162 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g5mj\" (UniqueName: \"kubernetes.io/projected/9be31bcf-2c0d-4efd-a5f4-e16028e12753-kube-api-access-5g5mj\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.433909 
4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-run-httpd\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.433941 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-log-httpd\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.436735 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.436800 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.436917 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-scripts\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.437835 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-config-data\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.450211 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g5mj\" (UniqueName: \"kubernetes.io/projected/9be31bcf-2c0d-4efd-a5f4-e16028e12753-kube-api-access-5g5mj\") pod \"ceilometer-0\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.578465 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.848016 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7304c631-78cb-4360-97f2-8349fa34b009" path="/var/lib/kubelet/pods/7304c631-78cb-4360-97f2-8349fa34b009/volumes" Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.849363 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6z999" event={"ID":"ceeba86d-0fd6-4a88-85ec-1ed6e079d413","Type":"ContainerStarted","Data":"64549fa048be5631091a79e884073643b1bdffad6e6e55cb928dfe8f2552ad46"} Nov 25 15:15:48 crc kubenswrapper[4731]: I1125 15:15:48.868846 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-6z999" podStartSLOduration=2.195811494 podStartE2EDuration="32.868829355s" podCreationTimestamp="2025-11-25 15:15:16 +0000 UTC" firstStartedPulling="2025-11-25 15:15:17.606588122 +0000 UTC m=+1276.353722552" lastFinishedPulling="2025-11-25 15:15:48.279605983 +0000 UTC m=+1307.026740413" observedRunningTime="2025-11-25 15:15:48.863521351 +0000 UTC m=+1307.610655781" watchObservedRunningTime="2025-11-25 15:15:48.868829355 +0000 UTC m=+1307.615963785" Nov 25 15:15:49 crc kubenswrapper[4731]: I1125 15:15:49.112894 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:15:49 crc kubenswrapper[4731]: I1125 15:15:49.860618 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9be31bcf-2c0d-4efd-a5f4-e16028e12753","Type":"ContainerStarted","Data":"4c937805904ede6ecfb4189de5c2ce9f4f5cf0f2a0378c530d0217dc0af73456"} Nov 25 15:15:50 crc kubenswrapper[4731]: I1125 15:15:50.873600 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9be31bcf-2c0d-4efd-a5f4-e16028e12753","Type":"ContainerStarted","Data":"da6a8e76104bcfb0d2785751b5c2092e713c5d4b5057888c9fcb2ce77f6d251e"} Nov 25 15:15:50 crc kubenswrapper[4731]: I1125 15:15:50.873902 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9be31bcf-2c0d-4efd-a5f4-e16028e12753","Type":"ContainerStarted","Data":"7956de508fbe4f304633c7834c1a50281059a7ece19714350043095b36dff842"} Nov 25 15:15:51 crc kubenswrapper[4731]: I1125 15:15:51.885443 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9be31bcf-2c0d-4efd-a5f4-e16028e12753","Type":"ContainerStarted","Data":"e839727bea9e739efa9b20862041e09edb7f19cfbb0ad918d7a152a2f500f916"} Nov 25 15:15:52 crc kubenswrapper[4731]: I1125 15:15:52.896760 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9be31bcf-2c0d-4efd-a5f4-e16028e12753","Type":"ContainerStarted","Data":"bcd322ca743b97ab59774e799467c3d52b79075c9badb3497875fd3485fde50e"} Nov 25 15:15:52 crc kubenswrapper[4731]: I1125 15:15:52.897349 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:15:52 crc kubenswrapper[4731]: I1125 15:15:52.938665 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.54501612 podStartE2EDuration="4.938644939s" podCreationTimestamp="2025-11-25 15:15:48 +0000 UTC" firstStartedPulling="2025-11-25 15:15:49.11642783 +0000 UTC m=+1307.863562260" lastFinishedPulling="2025-11-25 15:15:52.510056649 +0000 UTC m=+1311.257191079" observedRunningTime="2025-11-25 
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.315065 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.315851 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="ceilometer-central-agent" containerID="cri-o://7956de508fbe4f304633c7834c1a50281059a7ece19714350043095b36dff842" gracePeriod=30
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.316257 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="proxy-httpd" containerID="cri-o://bcd322ca743b97ab59774e799467c3d52b79075c9badb3497875fd3485fde50e" gracePeriod=30
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.316297 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="sg-core" containerID="cri-o://e839727bea9e739efa9b20862041e09edb7f19cfbb0ad918d7a152a2f500f916" gracePeriod=30
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.316326 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="ceilometer-notification-agent" containerID="cri-o://da6a8e76104bcfb0d2785751b5c2092e713c5d4b5057888c9fcb2ce77f6d251e" gracePeriod=30
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.966357 4731 generic.go:334] "Generic (PLEG): container finished" podID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerID="bcd322ca743b97ab59774e799467c3d52b79075c9badb3497875fd3485fde50e" exitCode=0
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.966389 4731 generic.go:334] "Generic (PLEG): container finished" podID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerID="e839727bea9e739efa9b20862041e09edb7f19cfbb0ad918d7a152a2f500f916" exitCode=2
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.966399 4731 generic.go:334] "Generic (PLEG): container finished" podID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerID="da6a8e76104bcfb0d2785751b5c2092e713c5d4b5057888c9fcb2ce77f6d251e" exitCode=0
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.966409 4731 generic.go:334] "Generic (PLEG): container finished" podID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerID="7956de508fbe4f304633c7834c1a50281059a7ece19714350043095b36dff842" exitCode=0
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.966428 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9be31bcf-2c0d-4efd-a5f4-e16028e12753","Type":"ContainerDied","Data":"bcd322ca743b97ab59774e799467c3d52b79075c9badb3497875fd3485fde50e"}
Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.966453 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9be31bcf-2c0d-4efd-a5f4-e16028e12753","Type":"ContainerDied","Data":"e839727bea9e739efa9b20862041e09edb7f19cfbb0ad918d7a152a2f500f916"}
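gracePeriod=30 above is the kubelet handing the runtime a 30-second stop budget per container: TERM first, KILL only on expiry. All four ceilometer containers exit well inside the budget (exitCode=0, and exitCode=2 for sg-core, which apparently returns nonzero on TERM), so no escalation happens here. The same pattern at the plain-process level, as a sketch rather than the actual CRI StopContainer plumbing:

```go
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace mirrors the stop sequence requested above: SIGTERM first,
// SIGKILL only if the grace period lapses.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done: // exited within the budget (exit codes 0/2 above)
		return err
	case <-time.After(grace): // budget exhausted: escalate
		_ = cmd.Process.Kill() // SIGKILL; a shell would report status 137 (128+9)
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("stopped:", stopWithGrace(cmd, 2*time.Second))
}
```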
event={"ID":"9be31bcf-2c0d-4efd-a5f4-e16028e12753","Type":"ContainerDied","Data":"da6a8e76104bcfb0d2785751b5c2092e713c5d4b5057888c9fcb2ce77f6d251e"} Nov 25 15:15:59 crc kubenswrapper[4731]: I1125 15:15:59.966472 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9be31bcf-2c0d-4efd-a5f4-e16028e12753","Type":"ContainerDied","Data":"7956de508fbe4f304633c7834c1a50281059a7ece19714350043095b36dff842"} Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.288843 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.359652 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5g5mj\" (UniqueName: \"kubernetes.io/projected/9be31bcf-2c0d-4efd-a5f4-e16028e12753-kube-api-access-5g5mj\") pod \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.359853 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-run-httpd\") pod \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.359924 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-log-httpd\") pod \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.359958 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-sg-core-conf-yaml\") pod \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.359989 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-config-data\") pod \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.360026 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-combined-ca-bundle\") pod \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.360055 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-scripts\") pod \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\" (UID: \"9be31bcf-2c0d-4efd-a5f4-e16028e12753\") " Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.360378 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9be31bcf-2c0d-4efd-a5f4-e16028e12753" (UID: "9be31bcf-2c0d-4efd-a5f4-e16028e12753"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.360427 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9be31bcf-2c0d-4efd-a5f4-e16028e12753" (UID: "9be31bcf-2c0d-4efd-a5f4-e16028e12753"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.360555 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.367513 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-scripts" (OuterVolumeSpecName: "scripts") pod "9be31bcf-2c0d-4efd-a5f4-e16028e12753" (UID: "9be31bcf-2c0d-4efd-a5f4-e16028e12753"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.370081 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be31bcf-2c0d-4efd-a5f4-e16028e12753-kube-api-access-5g5mj" (OuterVolumeSpecName: "kube-api-access-5g5mj") pod "9be31bcf-2c0d-4efd-a5f4-e16028e12753" (UID: "9be31bcf-2c0d-4efd-a5f4-e16028e12753"). InnerVolumeSpecName "kube-api-access-5g5mj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.392309 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9be31bcf-2c0d-4efd-a5f4-e16028e12753" (UID: "9be31bcf-2c0d-4efd-a5f4-e16028e12753"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.442533 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9be31bcf-2c0d-4efd-a5f4-e16028e12753" (UID: "9be31bcf-2c0d-4efd-a5f4-e16028e12753"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.462394 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.462428 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.462438 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5g5mj\" (UniqueName: \"kubernetes.io/projected/9be31bcf-2c0d-4efd-a5f4-e16028e12753-kube-api-access-5g5mj\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.462450 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9be31bcf-2c0d-4efd-a5f4-e16028e12753-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.462458 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.484883 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-config-data" (OuterVolumeSpecName: "config-data") pod "9be31bcf-2c0d-4efd-a5f4-e16028e12753" (UID: "9be31bcf-2c0d-4efd-a5f4-e16028e12753"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.564309 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9be31bcf-2c0d-4efd-a5f4-e16028e12753-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.976785 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9be31bcf-2c0d-4efd-a5f4-e16028e12753","Type":"ContainerDied","Data":"4c937805904ede6ecfb4189de5c2ce9f4f5cf0f2a0378c530d0217dc0af73456"} Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.976891 4731 scope.go:117] "RemoveContainer" containerID="bcd322ca743b97ab59774e799467c3d52b79075c9badb3497875fd3485fde50e" Nov 25 15:16:00 crc kubenswrapper[4731]: I1125 15:16:00.976902 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.008674 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.010269 4731 scope.go:117] "RemoveContainer" containerID="e839727bea9e739efa9b20862041e09edb7f19cfbb0ad918d7a152a2f500f916" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.018855 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.053208 4731 scope.go:117] "RemoveContainer" containerID="da6a8e76104bcfb0d2785751b5c2092e713c5d4b5057888c9fcb2ce77f6d251e" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.078768 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:01 crc kubenswrapper[4731]: E1125 15:16:01.079452 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="sg-core" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.079476 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="sg-core" Nov 25 15:16:01 crc kubenswrapper[4731]: E1125 15:16:01.079510 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="ceilometer-central-agent" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.079520 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="ceilometer-central-agent" Nov 25 15:16:01 crc kubenswrapper[4731]: E1125 15:16:01.079538 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="ceilometer-notification-agent" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.079547 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="ceilometer-notification-agent" Nov 25 15:16:01 crc kubenswrapper[4731]: E1125 15:16:01.079565 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="proxy-httpd" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.079572 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="proxy-httpd" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.079803 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="ceilometer-central-agent" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.079839 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="ceilometer-notification-agent" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.079869 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="proxy-httpd" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.079885 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" containerName="sg-core" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.083277 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.086679 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.088913 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.124402 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.131986 4731 scope.go:117] "RemoveContainer" containerID="7956de508fbe4f304633c7834c1a50281059a7ece19714350043095b36dff842" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.175978 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-config-data\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.176051 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.176076 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.176109 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-scripts\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.176164 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-run-httpd\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.176201 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-log-httpd\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.176258 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkv9v\" (UniqueName: \"kubernetes.io/projected/d7ac4656-528c-4e44-8b67-f39c0712e4ac-kube-api-access-dkv9v\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.278339 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-scripts\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.278438 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-run-httpd\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.278485 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-log-httpd\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.278534 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkv9v\" (UniqueName: \"kubernetes.io/projected/d7ac4656-528c-4e44-8b67-f39c0712e4ac-kube-api-access-dkv9v\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.278600 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-config-data\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.278620 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.278646 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.279903 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-log-httpd\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.280011 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-run-httpd\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.285139 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.285237 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-config-data\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.285768 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-scripts\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.288140 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.302533 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkv9v\" (UniqueName: \"kubernetes.io/projected/d7ac4656-528c-4e44-8b67-f39c0712e4ac-kube-api-access-dkv9v\") pod \"ceilometer-0\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.407196 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.884279 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.885275 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:16:01 crc kubenswrapper[4731]: I1125 15:16:01.991728 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7ac4656-528c-4e44-8b67-f39c0712e4ac","Type":"ContainerStarted","Data":"0a29aefbc1bd44c86d68f3320985d21dcab6ce58487786e96c5818e11a8b1ebf"} Nov 25 15:16:02 crc kubenswrapper[4731]: I1125 15:16:02.823323 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be31bcf-2c0d-4efd-a5f4-e16028e12753" path="/var/lib/kubelet/pods/9be31bcf-2c0d-4efd-a5f4-e16028e12753/volumes" Nov 25 15:16:03 crc kubenswrapper[4731]: I1125 15:16:03.026792 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7ac4656-528c-4e44-8b67-f39c0712e4ac","Type":"ContainerStarted","Data":"29a51530c3a5e44665af7dd1e95a87809e1470285a2ab4515f8d24a3ddd6473c"} Nov 25 15:16:03 crc kubenswrapper[4731]: I1125 15:16:03.029927 4731 generic.go:334] "Generic (PLEG): container finished" podID="ceeba86d-0fd6-4a88-85ec-1ed6e079d413" containerID="64549fa048be5631091a79e884073643b1bdffad6e6e55cb928dfe8f2552ad46" exitCode=0 Nov 25 15:16:03 crc kubenswrapper[4731]: I1125 15:16:03.029980 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6z999" event={"ID":"ceeba86d-0fd6-4a88-85ec-1ed6e079d413","Type":"ContainerDied","Data":"64549fa048be5631091a79e884073643b1bdffad6e6e55cb928dfe8f2552ad46"} Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.039118 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7ac4656-528c-4e44-8b67-f39c0712e4ac","Type":"ContainerStarted","Data":"53840393b2ad88b9de0f278671fa452dcd514d3e9acbc56599937c9a3d3b2a33"} Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.039392 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"d7ac4656-528c-4e44-8b67-f39c0712e4ac","Type":"ContainerStarted","Data":"4cdc04ee1b82eaf89a9b84615d1a311cc7e5613cf04fc9be9beabbb6bd7dc4b3"} Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.416775 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.460091 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-combined-ca-bundle\") pod \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.460415 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-config-data\") pod \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.460570 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-scripts\") pod \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.462507 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xnh4\" (UniqueName: \"kubernetes.io/projected/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-kube-api-access-6xnh4\") pod \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\" (UID: \"ceeba86d-0fd6-4a88-85ec-1ed6e079d413\") " Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.471221 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-kube-api-access-6xnh4" (OuterVolumeSpecName: "kube-api-access-6xnh4") pod "ceeba86d-0fd6-4a88-85ec-1ed6e079d413" (UID: "ceeba86d-0fd6-4a88-85ec-1ed6e079d413"). InnerVolumeSpecName "kube-api-access-6xnh4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.472194 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-scripts" (OuterVolumeSpecName: "scripts") pod "ceeba86d-0fd6-4a88-85ec-1ed6e079d413" (UID: "ceeba86d-0fd6-4a88-85ec-1ed6e079d413"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.504423 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ceeba86d-0fd6-4a88-85ec-1ed6e079d413" (UID: "ceeba86d-0fd6-4a88-85ec-1ed6e079d413"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.506074 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-config-data" (OuterVolumeSpecName: "config-data") pod "ceeba86d-0fd6-4a88-85ec-1ed6e079d413" (UID: "ceeba86d-0fd6-4a88-85ec-1ed6e079d413"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.565913 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.566359 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xnh4\" (UniqueName: \"kubernetes.io/projected/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-kube-api-access-6xnh4\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.566373 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:04 crc kubenswrapper[4731]: I1125 15:16:04.566382 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ceeba86d-0fd6-4a88-85ec-1ed6e079d413-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.047653 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-6z999" event={"ID":"ceeba86d-0fd6-4a88-85ec-1ed6e079d413","Type":"ContainerDied","Data":"e46b24e68051e30d5500aca8c126e80f137ff1e827a3ec33c32cf4874c2dd37d"} Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.047689 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e46b24e68051e30d5500aca8c126e80f137ff1e827a3ec33c32cf4874c2dd37d" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.047757 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-6z999" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.182998 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 15:16:05 crc kubenswrapper[4731]: E1125 15:16:05.183377 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceeba86d-0fd6-4a88-85ec-1ed6e079d413" containerName="nova-cell0-conductor-db-sync" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.183393 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceeba86d-0fd6-4a88-85ec-1ed6e079d413" containerName="nova-cell0-conductor-db-sync" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.183601 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceeba86d-0fd6-4a88-85ec-1ed6e079d413" containerName="nova-cell0-conductor-db-sync" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.184186 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.189941 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-rqk2l" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.193100 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.200799 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.280851 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5sns\" (UniqueName: \"kubernetes.io/projected/5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3-kube-api-access-l5sns\") pod \"nova-cell0-conductor-0\" (UID: \"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.280957 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.281109 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.383051 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.383111 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5sns\" (UniqueName: \"kubernetes.io/projected/5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3-kube-api-access-l5sns\") pod \"nova-cell0-conductor-0\" (UID: \"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.383158 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.386547 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.387615 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.401596 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5sns\" (UniqueName: \"kubernetes.io/projected/5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3-kube-api-access-l5sns\") pod \"nova-cell0-conductor-0\" (UID: \"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3\") " pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:05 crc kubenswrapper[4731]: I1125 15:16:05.516023 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:06 crc kubenswrapper[4731]: I1125 15:16:06.061419 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7ac4656-528c-4e44-8b67-f39c0712e4ac","Type":"ContainerStarted","Data":"655f64c508ddcd302953c83ec0733a1f29a0f9977a1cc74454e7c3075f6c09b5"} Nov 25 15:16:06 crc kubenswrapper[4731]: I1125 15:16:06.061933 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:16:06 crc kubenswrapper[4731]: I1125 15:16:06.105069 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.721884804 podStartE2EDuration="5.105043898s" podCreationTimestamp="2025-11-25 15:16:01 +0000 UTC" firstStartedPulling="2025-11-25 15:16:01.884957544 +0000 UTC m=+1320.632091974" lastFinishedPulling="2025-11-25 15:16:05.268116628 +0000 UTC m=+1324.015251068" observedRunningTime="2025-11-25 15:16:06.083108105 +0000 UTC m=+1324.830242535" watchObservedRunningTime="2025-11-25 15:16:06.105043898 +0000 UTC m=+1324.852178338" Nov 25 15:16:06 crc kubenswrapper[4731]: I1125 15:16:06.346442 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Nov 25 15:16:06 crc kubenswrapper[4731]: W1125 15:16:06.349956 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5cc27bf9_bdd0_43ec_8a78_6b0a8313d8b3.slice/crio-b555fb8c3a4db27ec11e86292a3ed0656c9ef8383de4553be0f0dfb7b0874037 WatchSource:0}: Error finding container b555fb8c3a4db27ec11e86292a3ed0656c9ef8383de4553be0f0dfb7b0874037: Status 404 returned error can't find the container with id b555fb8c3a4db27ec11e86292a3ed0656c9ef8383de4553be0f0dfb7b0874037 Nov 25 15:16:07 crc kubenswrapper[4731]: I1125 15:16:07.072209 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3","Type":"ContainerStarted","Data":"cd15f4911dd986252a7455ddaf18d426bd8d83c2d3c519cbb64210121ed685ba"} Nov 25 15:16:07 crc kubenswrapper[4731]: I1125 15:16:07.072545 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3","Type":"ContainerStarted","Data":"b555fb8c3a4db27ec11e86292a3ed0656c9ef8383de4553be0f0dfb7b0874037"} Nov 25 15:16:07 crc kubenswrapper[4731]: I1125 15:16:07.073072 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.114644 4731 generic.go:334] "Generic (PLEG): container finished" podID="81a541bc-b19f-488c-9d8d-2c48663b4033" containerID="ced4dbbb040f60c7e27942c96447d329de7b016d7725882f1b2ee13ceea24e78" exitCode=137 Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.114781 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" event={"ID":"81a541bc-b19f-488c-9d8d-2c48663b4033","Type":"ContainerDied","Data":"ced4dbbb040f60c7e27942c96447d329de7b016d7725882f1b2ee13ceea24e78"} Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.117032 4731 generic.go:334] "Generic (PLEG): container finished" podID="53a766a8-f8e4-4d2d-8202-e242a2e70d61" containerID="162e430b570dec75d9b8b874e4fd615530a2570d7548f84ed88d21cc5f79e912" exitCode=137 Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.117072 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c9d7769bd-zmnpm" event={"ID":"53a766a8-f8e4-4d2d-8202-e242a2e70d61","Type":"ContainerDied","Data":"162e430b570dec75d9b8b874e4fd615530a2570d7548f84ed88d21cc5f79e912"} Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.438090 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.443076 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.456721 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=6.4566996549999995 podStartE2EDuration="6.456699655s" podCreationTimestamp="2025-11-25 15:16:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:07.096374783 +0000 UTC m=+1325.843509223" watchObservedRunningTime="2025-11-25 15:16:11.456699655 +0000 UTC m=+1330.203834085" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.597518 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8f67\" (UniqueName: \"kubernetes.io/projected/81a541bc-b19f-488c-9d8d-2c48663b4033-kube-api-access-n8f67\") pod \"81a541bc-b19f-488c-9d8d-2c48663b4033\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.597571 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data\") pod \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.597658 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data-custom\") pod \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.597684 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-combined-ca-bundle\") pod \"81a541bc-b19f-488c-9d8d-2c48663b4033\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.597745 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data-custom\") pod \"81a541bc-b19f-488c-9d8d-2c48663b4033\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " Nov 25 15:16:11 crc 
kubenswrapper[4731]: I1125 15:16:11.597786 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8f2t\" (UniqueName: \"kubernetes.io/projected/53a766a8-f8e4-4d2d-8202-e242a2e70d61-kube-api-access-b8f2t\") pod \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.597954 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-combined-ca-bundle\") pod \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\" (UID: \"53a766a8-f8e4-4d2d-8202-e242a2e70d61\") " Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.597973 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data\") pod \"81a541bc-b19f-488c-9d8d-2c48663b4033\" (UID: \"81a541bc-b19f-488c-9d8d-2c48663b4033\") " Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.603959 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "53a766a8-f8e4-4d2d-8202-e242a2e70d61" (UID: "53a766a8-f8e4-4d2d-8202-e242a2e70d61"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.604360 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81a541bc-b19f-488c-9d8d-2c48663b4033-kube-api-access-n8f67" (OuterVolumeSpecName: "kube-api-access-n8f67") pod "81a541bc-b19f-488c-9d8d-2c48663b4033" (UID: "81a541bc-b19f-488c-9d8d-2c48663b4033"). InnerVolumeSpecName "kube-api-access-n8f67". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.604433 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53a766a8-f8e4-4d2d-8202-e242a2e70d61-kube-api-access-b8f2t" (OuterVolumeSpecName: "kube-api-access-b8f2t") pod "53a766a8-f8e4-4d2d-8202-e242a2e70d61" (UID: "53a766a8-f8e4-4d2d-8202-e242a2e70d61"). InnerVolumeSpecName "kube-api-access-b8f2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.604954 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "81a541bc-b19f-488c-9d8d-2c48663b4033" (UID: "81a541bc-b19f-488c-9d8d-2c48663b4033"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.624216 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53a766a8-f8e4-4d2d-8202-e242a2e70d61" (UID: "53a766a8-f8e4-4d2d-8202-e242a2e70d61"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.626251 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81a541bc-b19f-488c-9d8d-2c48663b4033" (UID: "81a541bc-b19f-488c-9d8d-2c48663b4033"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.646992 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data" (OuterVolumeSpecName: "config-data") pod "53a766a8-f8e4-4d2d-8202-e242a2e70d61" (UID: "53a766a8-f8e4-4d2d-8202-e242a2e70d61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.662489 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data" (OuterVolumeSpecName: "config-data") pod "81a541bc-b19f-488c-9d8d-2c48663b4033" (UID: "81a541bc-b19f-488c-9d8d-2c48663b4033"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.699974 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.700008 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.700017 4731 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data-custom\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.700026 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8f2t\" (UniqueName: \"kubernetes.io/projected/53a766a8-f8e4-4d2d-8202-e242a2e70d61-kube-api-access-b8f2t\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.700044 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.700053 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81a541bc-b19f-488c-9d8d-2c48663b4033-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.700061 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8f67\" (UniqueName: \"kubernetes.io/projected/81a541bc-b19f-488c-9d8d-2c48663b4033-kube-api-access-n8f67\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:11 crc kubenswrapper[4731]: I1125 15:16:11.700073 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53a766a8-f8e4-4d2d-8202-e242a2e70d61-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:12 crc 
kubenswrapper[4731]: I1125 15:16:12.127496 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" event={"ID":"81a541bc-b19f-488c-9d8d-2c48663b4033","Type":"ContainerDied","Data":"fe70a99642978f0cd7b9595ba2ac1fe48b1daeff76df35527f479a904808bd0c"} Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.127514 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-ffdfb7796-4v5j9" Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.127886 4731 scope.go:117] "RemoveContainer" containerID="ced4dbbb040f60c7e27942c96447d329de7b016d7725882f1b2ee13ceea24e78" Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.130263 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-6c9d7769bd-zmnpm" event={"ID":"53a766a8-f8e4-4d2d-8202-e242a2e70d61","Type":"ContainerDied","Data":"1629872ed3402d7ec3d68832a811effb6387e3e20c73996b5bf005075e0da0c9"} Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.130420 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-6c9d7769bd-zmnpm" Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.157758 4731 scope.go:117] "RemoveContainer" containerID="162e430b570dec75d9b8b874e4fd615530a2570d7548f84ed88d21cc5f79e912" Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.183246 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-ffdfb7796-4v5j9"] Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.193381 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-ffdfb7796-4v5j9"] Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.202687 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-6c9d7769bd-zmnpm"] Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.211322 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-6c9d7769bd-zmnpm"] Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.831657 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53a766a8-f8e4-4d2d-8202-e242a2e70d61" path="/var/lib/kubelet/pods/53a766a8-f8e4-4d2d-8202-e242a2e70d61/volumes" Nov 25 15:16:12 crc kubenswrapper[4731]: I1125 15:16:12.833317 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81a541bc-b19f-488c-9d8d-2c48663b4033" path="/var/lib/kubelet/pods/81a541bc-b19f-488c-9d8d-2c48663b4033/volumes" Nov 25 15:16:15 crc kubenswrapper[4731]: I1125 15:16:15.552593 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.182262 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-h5455"] Nov 25 15:16:16 crc kubenswrapper[4731]: E1125 15:16:16.183121 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81a541bc-b19f-488c-9d8d-2c48663b4033" containerName="heat-cfnapi" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.183146 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="81a541bc-b19f-488c-9d8d-2c48663b4033" containerName="heat-cfnapi" Nov 25 15:16:16 crc kubenswrapper[4731]: E1125 15:16:16.183181 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53a766a8-f8e4-4d2d-8202-e242a2e70d61" containerName="heat-api" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.183191 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="53a766a8-f8e4-4d2d-8202-e242a2e70d61" containerName="heat-api" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.183411 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="81a541bc-b19f-488c-9d8d-2c48663b4033" containerName="heat-cfnapi" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.183445 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="53a766a8-f8e4-4d2d-8202-e242a2e70d61" containerName="heat-api" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.184281 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.188561 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.188940 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.192905 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h5455"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.363902 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.366251 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.376913 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.388311 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18253943-10e9-4132-8f48-99d3648a7ce0-logs\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.388416 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-scripts\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.388626 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67gh7\" (UniqueName: \"kubernetes.io/projected/f671b8ea-21ca-4f84-a181-e98552324ee7-kube-api-access-67gh7\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.388736 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-config-data\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.388888 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fvfb\" (UniqueName: \"kubernetes.io/projected/18253943-10e9-4132-8f48-99d3648a7ce0-kube-api-access-8fvfb\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" 
Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.388924 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.388964 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-config-data\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.389034 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.392104 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.463090 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.464442 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.468219 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.489619 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.490588 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67gh7\" (UniqueName: \"kubernetes.io/projected/f671b8ea-21ca-4f84-a181-e98552324ee7-kube-api-access-67gh7\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.490634 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-config-data\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.490687 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fvfb\" (UniqueName: \"kubernetes.io/projected/18253943-10e9-4132-8f48-99d3648a7ce0-kube-api-access-8fvfb\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.490705 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.490731 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-config-data\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.490760 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.490781 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18253943-10e9-4132-8f48-99d3648a7ce0-logs\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.490836 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-scripts\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.497414 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18253943-10e9-4132-8f48-99d3648a7ce0-logs\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.510064 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-scripts\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.512621 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.513388 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-config-data\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.513872 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-config-data\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.541771 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67gh7\" (UniqueName: \"kubernetes.io/projected/f671b8ea-21ca-4f84-a181-e98552324ee7-kube-api-access-67gh7\") pod \"nova-cell0-cell-mapping-h5455\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " 
pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.542379 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.550386 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fvfb\" (UniqueName: \"kubernetes.io/projected/18253943-10e9-4132-8f48-99d3648a7ce0-kube-api-access-8fvfb\") pod \"nova-api-0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.592865 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-config-data\") pod \"nova-scheduler-0\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.592952 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.593027 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h4t4\" (UniqueName: \"kubernetes.io/projected/7ab22e2f-bced-4f8a-9507-25d69bccf58a-kube-api-access-5h4t4\") pod \"nova-scheduler-0\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.648913 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.650747 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.680039 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.690340 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.690858 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.695084 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h4t4\" (UniqueName: \"kubernetes.io/projected/7ab22e2f-bced-4f8a-9507-25d69bccf58a-kube-api-access-5h4t4\") pod \"nova-scheduler-0\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.695216 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-config-data\") pod \"nova-scheduler-0\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.695305 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.700794 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.715833 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.717710 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.726185 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.741415 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-config-data\") pod \"nova-scheduler-0\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.784682 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h4t4\" (UniqueName: \"kubernetes.io/projected/7ab22e2f-bced-4f8a-9507-25d69bccf58a-kube-api-access-5h4t4\") pod \"nova-scheduler-0\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.786352 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.819433 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.819608 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-294kk\" (UniqueName: 
\"kubernetes.io/projected/52434f5c-492f-4ee4-866b-f97a38e99fe2-kube-api-access-294kk\") pod \"nova-cell1-novncproxy-0\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.819783 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.820883 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.890238 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xktrd"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.905039 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xktrd"] Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.905158 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.921229 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fspq\" (UniqueName: \"kubernetes.io/projected/d683c68e-92f8-4c33-86dd-bd9a694ca364-kube-api-access-5fspq\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.921325 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.921377 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.921403 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.921465 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d683c68e-92f8-4c33-86dd-bd9a694ca364-logs\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.921502 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-294kk\" (UniqueName: \"kubernetes.io/projected/52434f5c-492f-4ee4-866b-f97a38e99fe2-kube-api-access-294kk\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.927250 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-config-data\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.934919 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.937937 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:16 crc kubenswrapper[4731]: I1125 15:16:16.953215 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-294kk\" (UniqueName: \"kubernetes.io/projected/52434f5c-492f-4ee4-866b-f97a38e99fe2-kube-api-access-294kk\") pod \"nova-cell1-novncproxy-0\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.032478 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.032552 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.032593 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.032635 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d683c68e-92f8-4c33-86dd-bd9a694ca364-logs\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.032742 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-config-data\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.032764 4731 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.032788 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqq8x\" (UniqueName: \"kubernetes.io/projected/6ef75da3-b43f-463c-b511-f1d832a47f0a-kube-api-access-cqq8x\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.032887 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fspq\" (UniqueName: \"kubernetes.io/projected/d683c68e-92f8-4c33-86dd-bd9a694ca364-kube-api-access-5fspq\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.032906 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-config\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.032928 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.037691 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-config-data\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.038051 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d683c68e-92f8-4c33-86dd-bd9a694ca364-logs\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.038233 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.050520 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.053673 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fspq\" (UniqueName: \"kubernetes.io/projected/d683c68e-92f8-4c33-86dd-bd9a694ca364-kube-api-access-5fspq\") pod \"nova-metadata-0\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " pod="openstack/nova-metadata-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.085977 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.135591 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.135649 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqq8x\" (UniqueName: \"kubernetes.io/projected/6ef75da3-b43f-463c-b511-f1d832a47f0a-kube-api-access-cqq8x\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.135701 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-config\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.135723 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.135781 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.135809 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.136843 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.136972 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-nb\") pod 
\"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.137691 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-config\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.137735 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.138689 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.165646 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqq8x\" (UniqueName: \"kubernetes.io/projected/6ef75da3-b43f-463c-b511-f1d832a47f0a-kube-api-access-cqq8x\") pod \"dnsmasq-dns-568d7fd7cf-xktrd\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.242061 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.263321 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.353198 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nwdc4"] Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.354744 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.358254 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.358478 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.370424 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nwdc4"] Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.380123 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:16:17 crc kubenswrapper[4731]: W1125 15:16:17.385950 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18253943_10e9_4132_8f48_99d3648a7ce0.slice/crio-a4cafb5d8f166f8955578621cfebdc90eeeb60f241f0609ead07c7545396bafe WatchSource:0}: Error finding container a4cafb5d8f166f8955578621cfebdc90eeeb60f241f0609ead07c7545396bafe: Status 404 returned error can't find the container with id a4cafb5d8f166f8955578621cfebdc90eeeb60f241f0609ead07c7545396bafe Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.447043 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6h22\" (UniqueName: \"kubernetes.io/projected/c717723d-912c-4e6d-add5-f7f1628cb069-kube-api-access-m6h22\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.447113 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.447207 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-config-data\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.447339 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-scripts\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.554980 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6h22\" (UniqueName: \"kubernetes.io/projected/c717723d-912c-4e6d-add5-f7f1628cb069-kube-api-access-m6h22\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.556076 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.556169 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-config-data\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.556312 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-scripts\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.563151 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.563240 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-config-data\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.566041 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-scripts\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: W1125 15:16:17.571906 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf671b8ea_21ca_4f84_a181_e98552324ee7.slice/crio-cc4df55a47377dd8b4e5ed3770c3bd67f794af4cc21e3aede0d063c2d4253bc3 WatchSource:0}: Error finding container cc4df55a47377dd8b4e5ed3770c3bd67f794af4cc21e3aede0d063c2d4253bc3: Status 404 returned error can't find the container with id cc4df55a47377dd8b4e5ed3770c3bd67f794af4cc21e3aede0d063c2d4253bc3 Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.572107 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-h5455"] Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.578146 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6h22\" (UniqueName: \"kubernetes.io/projected/c717723d-912c-4e6d-add5-f7f1628cb069-kube-api-access-m6h22\") pod \"nova-cell1-conductor-db-sync-nwdc4\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.660856 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:16:17 crc kubenswrapper[4731]: W1125 15:16:17.666753 4731 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52434f5c_492f_4ee4_866b_f97a38e99fe2.slice/crio-5651ba340d97137ed914f266784e5a982bfc266422cc007405999cfc4c453623 WatchSource:0}: Error finding container 5651ba340d97137ed914f266784e5a982bfc266422cc007405999cfc4c453623: Status 404 returned error can't find the container with id 5651ba340d97137ed914f266784e5a982bfc266422cc007405999cfc4c453623 Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.689772 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.744670 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.905997 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:17 crc kubenswrapper[4731]: W1125 15:16:17.964923 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ef75da3_b43f_463c_b511_f1d832a47f0a.slice/crio-61b2ece0ca8eddc005290f19da4cdb69ede4249dbd11e1461a90aded7177590a WatchSource:0}: Error finding container 61b2ece0ca8eddc005290f19da4cdb69ede4249dbd11e1461a90aded7177590a: Status 404 returned error can't find the container with id 61b2ece0ca8eddc005290f19da4cdb69ede4249dbd11e1461a90aded7177590a Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.970119 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xktrd"] Nov 25 15:16:18 crc kubenswrapper[4731]: I1125 15:16:18.202749 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nwdc4"] Nov 25 15:16:18 crc kubenswrapper[4731]: W1125 15:16:18.209218 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc717723d_912c_4e6d_add5_f7f1628cb069.slice/crio-275401f4b6f4fc7455f88cf9abb1490afade33cacd3030741c3f950f575ddd94 WatchSource:0}: Error finding container 275401f4b6f4fc7455f88cf9abb1490afade33cacd3030741c3f950f575ddd94: Status 404 returned error can't find the container with id 275401f4b6f4fc7455f88cf9abb1490afade33cacd3030741c3f950f575ddd94 Nov 25 15:16:18 crc kubenswrapper[4731]: I1125 15:16:18.210735 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18253943-10e9-4132-8f48-99d3648a7ce0","Type":"ContainerStarted","Data":"a4cafb5d8f166f8955578621cfebdc90eeeb60f241f0609ead07c7545396bafe"} Nov 25 15:16:18 crc kubenswrapper[4731]: I1125 15:16:18.215076 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" event={"ID":"6ef75da3-b43f-463c-b511-f1d832a47f0a","Type":"ContainerStarted","Data":"61b2ece0ca8eddc005290f19da4cdb69ede4249dbd11e1461a90aded7177590a"} Nov 25 15:16:18 crc kubenswrapper[4731]: I1125 15:16:18.218564 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d683c68e-92f8-4c33-86dd-bd9a694ca364","Type":"ContainerStarted","Data":"384736c3e0ed0e12ec16ff0e73fcea3503ca0b92e138bf65a74e72a3f5eaa0a4"} Nov 25 15:16:18 crc kubenswrapper[4731]: I1125 15:16:18.223842 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7ab22e2f-bced-4f8a-9507-25d69bccf58a","Type":"ContainerStarted","Data":"1d4393048a69fd28f8716903f9ae6fbe42303a3fc1f61a0ce8f3d883016c62a1"} 
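The kubenswrapper messages above use klog's header layout: a severity letter (I/W/E/F) fused to the month and day, then the wall-clock time, the emitting PID, and the source file:line, followed by the message. A minimal parsing sketch (Python; it assumes entries have been split one per line as above, and the field names are ours, not kubelet's):

import re
from datetime import datetime

# klog header: <L><MMDD> <HH:MM:SS.ffffff> <pid> <file>:<line>] <message>
# e.g. "I1125 15:16:17.358254 4731 reflector.go:368] Caches populated ..."
KLOG_RE = re.compile(
    r'(?P<level>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+)\s+'
    r'(?P<source>[\w.]+:\d+)\]\s'
    r'(?P<message>.*)'
)

def parse_klog(line, year=2025):
    """Parse one klog-formatted entry; the journald prefix is skipped
    because we search rather than match. Returns None on no match."""
    m = KLOG_RE.search(line)
    if not m:
        return None
    when = datetime.strptime(
        f"{year}-{m['month']}-{m['day']} {m['time']}", "%Y-%m-%d %H:%M:%S.%f")
    return {"level": m["level"], "time": when, "pid": int(m["pid"]),
            "source": m["source"], "message": m["message"]}

sample = ('Nov 25 15:16:17 crc kubenswrapper[4731]: I1125 15:16:17.358254 '
          '4731 reflector.go:368] Caches populated for *v1.Secret from '
          'object-"openstack"/"nova-cell1-conductor-config-data"')
print(parse_klog(sample)["source"])  # reflector.go:368

The klog header carries no year, hence the parameter; the journald prefix date is the authoritative one.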
Nov 25 15:16:18 crc kubenswrapper[4731]: I1125 15:16:18.229456 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h5455" event={"ID":"f671b8ea-21ca-4f84-a181-e98552324ee7","Type":"ContainerStarted","Data":"7827d5d55257d5f79e1ad8f9fe3086730c441b0ef766902faeabc9634ed9f89e"}
Nov 25 15:16:18 crc kubenswrapper[4731]: I1125 15:16:18.229503 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h5455" event={"ID":"f671b8ea-21ca-4f84-a181-e98552324ee7","Type":"ContainerStarted","Data":"cc4df55a47377dd8b4e5ed3770c3bd67f794af4cc21e3aede0d063c2d4253bc3"}
Nov 25 15:16:18 crc kubenswrapper[4731]: I1125 15:16:18.232447 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52434f5c-492f-4ee4-866b-f97a38e99fe2","Type":"ContainerStarted","Data":"5651ba340d97137ed914f266784e5a982bfc266422cc007405999cfc4c453623"}
Nov 25 15:16:18 crc kubenswrapper[4731]: I1125 15:16:18.256499 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-h5455" podStartSLOduration=2.2564781050000002 podStartE2EDuration="2.256478105s" podCreationTimestamp="2025-11-25 15:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:18.255965272 +0000 UTC m=+1337.003099712" watchObservedRunningTime="2025-11-25 15:16:18.256478105 +0000 UTC m=+1337.003612535"
Nov 25 15:16:19 crc kubenswrapper[4731]: I1125 15:16:19.280200 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nwdc4" event={"ID":"c717723d-912c-4e6d-add5-f7f1628cb069","Type":"ContainerStarted","Data":"77f20e12ba2b6aedc665350a94f95303a2f00f9c4da605142cabca833bc9658a"}
Nov 25 15:16:19 crc kubenswrapper[4731]: I1125 15:16:19.280524 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nwdc4" event={"ID":"c717723d-912c-4e6d-add5-f7f1628cb069","Type":"ContainerStarted","Data":"275401f4b6f4fc7455f88cf9abb1490afade33cacd3030741c3f950f575ddd94"}
Nov 25 15:16:19 crc kubenswrapper[4731]: I1125 15:16:19.284079 4731 generic.go:334] "Generic (PLEG): container finished" podID="6ef75da3-b43f-463c-b511-f1d832a47f0a" containerID="6cebcf7ec80d36ee484af21f1e270e5d046240cfb0100cabdb600abd98af82d7" exitCode=0
Nov 25 15:16:19 crc kubenswrapper[4731]: I1125 15:16:19.285190 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" event={"ID":"6ef75da3-b43f-463c-b511-f1d832a47f0a","Type":"ContainerDied","Data":"6cebcf7ec80d36ee484af21f1e270e5d046240cfb0100cabdb600abd98af82d7"}
Nov 25 15:16:19 crc kubenswrapper[4731]: I1125 15:16:19.304764 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-nwdc4" podStartSLOduration=2.304741165 podStartE2EDuration="2.304741165s" podCreationTimestamp="2025-11-25 15:16:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:19.297209455 +0000 UTC m=+1338.044343885" watchObservedRunningTime="2025-11-25 15:16:19.304741165 +0000 UTC m=+1338.051875595"
Nov 25 15:16:20 crc kubenswrapper[4731]: I1125 15:16:20.039959 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:16:20 crc kubenswrapper[4731]: I1125 15:16:20.040306 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:16:20 crc kubenswrapper[4731]: I1125 15:16:20.447784 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Nov 25 15:16:20 crc kubenswrapper[4731]: I1125 15:16:20.470562 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.315283 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d683c68e-92f8-4c33-86dd-bd9a694ca364","Type":"ContainerStarted","Data":"2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f"}
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.315901 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d683c68e-92f8-4c33-86dd-bd9a694ca364","Type":"ContainerStarted","Data":"dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d"}
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.315479 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d683c68e-92f8-4c33-86dd-bd9a694ca364" containerName="nova-metadata-metadata" containerID="cri-o://2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f" gracePeriod=30
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.315403 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d683c68e-92f8-4c33-86dd-bd9a694ca364" containerName="nova-metadata-log" containerID="cri-o://dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d" gracePeriod=30
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.318470 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7ab22e2f-bced-4f8a-9507-25d69bccf58a","Type":"ContainerStarted","Data":"035a4acf13b5bfe79752b9460a9a8d1e2b1b4c148cfb7f25bd6a9b3f20707381"}
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.320964 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52434f5c-492f-4ee4-866b-f97a38e99fe2","Type":"ContainerStarted","Data":"de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939"}
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.320973 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="52434f5c-492f-4ee4-866b-f97a38e99fe2" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939" gracePeriod=30
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.323294 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18253943-10e9-4132-8f48-99d3648a7ce0","Type":"ContainerStarted","Data":"cdea132b49e0c75927f89abf6996b34df152d604bd1bd703cfb96050703f0b66"}
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.323327 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18253943-10e9-4132-8f48-99d3648a7ce0","Type":"ContainerStarted","Data":"c4ca79c2fa65938e8215ab1d22cb5ec2a319f120caef3bf92dd1fcfcb7ac0bda"}
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.335238 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.245408419 podStartE2EDuration="6.335215934s" podCreationTimestamp="2025-11-25 15:16:16 +0000 UTC" firstStartedPulling="2025-11-25 15:16:17.91929501 +0000 UTC m=+1336.666429450" lastFinishedPulling="2025-11-25 15:16:21.009102535 +0000 UTC m=+1339.756236965" observedRunningTime="2025-11-25 15:16:22.334168017 +0000 UTC m=+1341.081302467" watchObservedRunningTime="2025-11-25 15:16:22.335215934 +0000 UTC m=+1341.082350374"
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.338248 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" event={"ID":"6ef75da3-b43f-463c-b511-f1d832a47f0a","Type":"ContainerStarted","Data":"f2d2304b88f0461fd1aa060dc3840b65cd6ff7686a998a5060ed94f67db9c6a4"}
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.338458 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd"
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.361259 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.113579664 podStartE2EDuration="6.35884115s" podCreationTimestamp="2025-11-25 15:16:16 +0000 UTC" firstStartedPulling="2025-11-25 15:16:17.768011484 +0000 UTC m=+1336.515145914" lastFinishedPulling="2025-11-25 15:16:21.01327297 +0000 UTC m=+1339.760407400" observedRunningTime="2025-11-25 15:16:22.353028293 +0000 UTC m=+1341.100162733" watchObservedRunningTime="2025-11-25 15:16:22.35884115 +0000 UTC m=+1341.105975580"
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.374168 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.754548288 podStartE2EDuration="6.374150546s" podCreationTimestamp="2025-11-25 15:16:16 +0000 UTC" firstStartedPulling="2025-11-25 15:16:17.392943773 +0000 UTC m=+1336.140078213" lastFinishedPulling="2025-11-25 15:16:21.012546051 +0000 UTC m=+1339.759680471" observedRunningTime="2025-11-25 15:16:22.36796685 +0000 UTC m=+1341.115101290" watchObservedRunningTime="2025-11-25 15:16:22.374150546 +0000 UTC m=+1341.121284976"
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.392621 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.050362769 podStartE2EDuration="6.392594801s" podCreationTimestamp="2025-11-25 15:16:16 +0000 UTC" firstStartedPulling="2025-11-25 15:16:17.669344535 +0000 UTC m=+1336.416478965" lastFinishedPulling="2025-11-25 15:16:21.011576567 +0000 UTC m=+1339.758710997" observedRunningTime="2025-11-25 15:16:22.390275693 +0000 UTC m=+1341.137410133" watchObservedRunningTime="2025-11-25 15:16:22.392594801 +0000 UTC m=+1341.139729231"
Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.420789 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" podStartSLOduration=6.420766812 podStartE2EDuration="6.420766812s" podCreationTimestamp="2025-11-25 15:16:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:22.414161025 +0000 UTC m=+1341.161295445" watchObservedRunningTime="2025-11-25 15:16:22.420766812 +0000 UTC m=+1341.167901242"
observedRunningTime="2025-11-25 15:16:22.414161025 +0000 UTC m=+1341.161295445" watchObservedRunningTime="2025-11-25 15:16:22.420766812 +0000 UTC m=+1341.167901242" Nov 25 15:16:22 crc kubenswrapper[4731]: I1125 15:16:22.942571 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.061192 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-config-data\") pod \"d683c68e-92f8-4c33-86dd-bd9a694ca364\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.061263 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fspq\" (UniqueName: \"kubernetes.io/projected/d683c68e-92f8-4c33-86dd-bd9a694ca364-kube-api-access-5fspq\") pod \"d683c68e-92f8-4c33-86dd-bd9a694ca364\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.061453 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d683c68e-92f8-4c33-86dd-bd9a694ca364-logs\") pod \"d683c68e-92f8-4c33-86dd-bd9a694ca364\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.061487 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-combined-ca-bundle\") pod \"d683c68e-92f8-4c33-86dd-bd9a694ca364\" (UID: \"d683c68e-92f8-4c33-86dd-bd9a694ca364\") " Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.061974 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d683c68e-92f8-4c33-86dd-bd9a694ca364-logs" (OuterVolumeSpecName: "logs") pod "d683c68e-92f8-4c33-86dd-bd9a694ca364" (UID: "d683c68e-92f8-4c33-86dd-bd9a694ca364"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.062242 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d683c68e-92f8-4c33-86dd-bd9a694ca364-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.067445 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d683c68e-92f8-4c33-86dd-bd9a694ca364-kube-api-access-5fspq" (OuterVolumeSpecName: "kube-api-access-5fspq") pod "d683c68e-92f8-4c33-86dd-bd9a694ca364" (UID: "d683c68e-92f8-4c33-86dd-bd9a694ca364"). InnerVolumeSpecName "kube-api-access-5fspq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.094910 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-config-data" (OuterVolumeSpecName: "config-data") pod "d683c68e-92f8-4c33-86dd-bd9a694ca364" (UID: "d683c68e-92f8-4c33-86dd-bd9a694ca364"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.097542 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d683c68e-92f8-4c33-86dd-bd9a694ca364" (UID: "d683c68e-92f8-4c33-86dd-bd9a694ca364"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.164786 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.164854 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d683c68e-92f8-4c33-86dd-bd9a694ca364-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.164866 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fspq\" (UniqueName: \"kubernetes.io/projected/d683c68e-92f8-4c33-86dd-bd9a694ca364-kube-api-access-5fspq\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.349268 4731 generic.go:334] "Generic (PLEG): container finished" podID="d683c68e-92f8-4c33-86dd-bd9a694ca364" containerID="2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f" exitCode=0 Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.349308 4731 generic.go:334] "Generic (PLEG): container finished" podID="d683c68e-92f8-4c33-86dd-bd9a694ca364" containerID="dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d" exitCode=143 Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.349437 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d683c68e-92f8-4c33-86dd-bd9a694ca364","Type":"ContainerDied","Data":"2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f"} Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.349474 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d683c68e-92f8-4c33-86dd-bd9a694ca364","Type":"ContainerDied","Data":"dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d"} Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.349488 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d683c68e-92f8-4c33-86dd-bd9a694ca364","Type":"ContainerDied","Data":"384736c3e0ed0e12ec16ff0e73fcea3503ca0b92e138bf65a74e72a3f5eaa0a4"} Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.349507 4731 scope.go:117] "RemoveContainer" containerID="2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.349663 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.378649 4731 scope.go:117] "RemoveContainer" containerID="dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.403527 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.421802 4731 scope.go:117] "RemoveContainer" containerID="2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f" Nov 25 15:16:23 crc kubenswrapper[4731]: E1125 15:16:23.428364 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f\": container with ID starting with 2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f not found: ID does not exist" containerID="2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.428421 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f"} err="failed to get container status \"2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f\": rpc error: code = NotFound desc = could not find container \"2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f\": container with ID starting with 2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f not found: ID does not exist" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.428455 4731 scope.go:117] "RemoveContainer" containerID="dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d" Nov 25 15:16:23 crc kubenswrapper[4731]: E1125 15:16:23.434452 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d\": container with ID starting with dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d not found: ID does not exist" containerID="dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.434506 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d"} err="failed to get container status \"dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d\": rpc error: code = NotFound desc = could not find container \"dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d\": container with ID starting with dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d not found: ID does not exist" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.434543 4731 scope.go:117] "RemoveContainer" containerID="2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.438989 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f"} err="failed to get container status \"2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f\": rpc error: code = NotFound desc = could not find container \"2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f\": container with ID starting with 
2fdc4012e578e8fbfa658cbfa770a7ccf0f926a3445af667f830522a8bb2871f not found: ID does not exist" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.439042 4731 scope.go:117] "RemoveContainer" containerID="dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.439241 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.445874 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d"} err="failed to get container status \"dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d\": rpc error: code = NotFound desc = could not find container \"dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d\": container with ID starting with dd5a8725d88b3f9b1d6c70621d41008c8de500142e191144314e17bd3a524d9d not found: ID does not exist" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.450972 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:23 crc kubenswrapper[4731]: E1125 15:16:23.451444 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d683c68e-92f8-4c33-86dd-bd9a694ca364" containerName="nova-metadata-log" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.451462 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d683c68e-92f8-4c33-86dd-bd9a694ca364" containerName="nova-metadata-log" Nov 25 15:16:23 crc kubenswrapper[4731]: E1125 15:16:23.451499 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d683c68e-92f8-4c33-86dd-bd9a694ca364" containerName="nova-metadata-metadata" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.451508 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d683c68e-92f8-4c33-86dd-bd9a694ca364" containerName="nova-metadata-metadata" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.451767 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d683c68e-92f8-4c33-86dd-bd9a694ca364" containerName="nova-metadata-log" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.451790 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d683c68e-92f8-4c33-86dd-bd9a694ca364" containerName="nova-metadata-metadata" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.452998 4731 util.go:30] "No sandbox for pod can be found. 
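The paired "container finished" lines above show the usual graceful-shutdown signature: nova-metadata-metadata exited 0 while nova-metadata-log exited 143. Exit codes above 128 conventionally mean 128 plus the terminating signal, so 143 is SIGTERM, matching the earlier "Killing container with a grace period" entries; a SIGKILL after an expired grace period would surface as 137 instead. A small decoder, as a sketch:

import signal

def describe_exit(code):
    """Decode a container exit code using the 128+signal convention."""
    if code > 128:
        try:
            return f"killed by {signal.Signals(code - 128).name}"
        except ValueError:
            return f"killed by signal {code - 128}"
    return "clean exit" if code == 0 else f"exited with status {code}"

print(describe_exit(143))  # killed by SIGTERM
print(describe_exit(137))  # killed by SIGKILL
print(describe_exit(0))    # clean exit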
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.459322 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.460028 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.460387 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.576027 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.576610 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jthw\" (UniqueName: \"kubernetes.io/projected/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-kube-api-access-4jthw\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.576774 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-config-data\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.577161 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-logs\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.577365 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.679293 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.679386 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jthw\" (UniqueName: \"kubernetes.io/projected/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-kube-api-access-4jthw\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.679424 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-config-data\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " 
pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.679485 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-logs\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.679580 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.680413 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-logs\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.684540 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.684898 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-config-data\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.695017 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.695453 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jthw\" (UniqueName: \"kubernetes.io/projected/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-kube-api-access-4jthw\") pod \"nova-metadata-0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " pod="openstack/nova-metadata-0" Nov 25 15:16:23 crc kubenswrapper[4731]: I1125 15:16:23.777359 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:16:24 crc kubenswrapper[4731]: I1125 15:16:24.247449 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:24 crc kubenswrapper[4731]: W1125 15:16:24.257613 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b7f3eed_89a2_4d7f_8ab6_f05c834665e0.slice/crio-ef91c1b79aeb386fa0dab230764c69c70bcee996ae49562b6d88ca696e8fb3ce WatchSource:0}: Error finding container ef91c1b79aeb386fa0dab230764c69c70bcee996ae49562b6d88ca696e8fb3ce: Status 404 returned error can't find the container with id ef91c1b79aeb386fa0dab230764c69c70bcee996ae49562b6d88ca696e8fb3ce Nov 25 15:16:24 crc kubenswrapper[4731]: I1125 15:16:24.362999 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0","Type":"ContainerStarted","Data":"ef91c1b79aeb386fa0dab230764c69c70bcee996ae49562b6d88ca696e8fb3ce"} Nov 25 15:16:24 crc kubenswrapper[4731]: I1125 15:16:24.824462 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d683c68e-92f8-4c33-86dd-bd9a694ca364" path="/var/lib/kubelet/pods/d683c68e-92f8-4c33-86dd-bd9a694ca364/volumes" Nov 25 15:16:25 crc kubenswrapper[4731]: I1125 15:16:25.374777 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0","Type":"ContainerStarted","Data":"65d418a4fda09f2dac2df562a0be97732bc316081789ec78243d1c9bef457b90"} Nov 25 15:16:25 crc kubenswrapper[4731]: I1125 15:16:25.375052 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0","Type":"ContainerStarted","Data":"d0b6a285a54eea0d4fbd6eb28fbddf5a1b32a33c408c4adb1d9afd248178a07c"} Nov 25 15:16:25 crc kubenswrapper[4731]: I1125 15:16:25.406170 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.406148713 podStartE2EDuration="2.406148713s" podCreationTimestamp="2025-11-25 15:16:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:25.399367882 +0000 UTC m=+1344.146502332" watchObservedRunningTime="2025-11-25 15:16:25.406148713 +0000 UTC m=+1344.153283153" Nov 25 15:16:26 crc kubenswrapper[4731]: I1125 15:16:26.385667 4731 generic.go:334] "Generic (PLEG): container finished" podID="f671b8ea-21ca-4f84-a181-e98552324ee7" containerID="7827d5d55257d5f79e1ad8f9fe3086730c441b0ef766902faeabc9634ed9f89e" exitCode=0 Nov 25 15:16:26 crc kubenswrapper[4731]: I1125 15:16:26.385761 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h5455" event={"ID":"f671b8ea-21ca-4f84-a181-e98552324ee7","Type":"ContainerDied","Data":"7827d5d55257d5f79e1ad8f9fe3086730c441b0ef766902faeabc9634ed9f89e"} Nov 25 15:16:26 crc kubenswrapper[4731]: I1125 15:16:26.692611 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:16:26 crc kubenswrapper[4731]: I1125 15:16:26.692736 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.051609 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:27 crc 
kubenswrapper[4731]: I1125 15:16:27.086951 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.087397 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.119019 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.264864 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.333163 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-nc8dk"] Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.333468 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" podUID="eafdd6d2-3514-4de8-afc5-651d22a486a4" containerName="dnsmasq-dns" containerID="cri-o://61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e" gracePeriod=10 Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.438782 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.775302 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="18253943-10e9-4132-8f48-99d3648a7ce0" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.776136 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="18253943-10e9-4132-8f48-99d3648a7ce0" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.192:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.843840 4731 util.go:48] "No ready sandbox for pod can be found. 
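The two probe failures visible in this stretch, "connect: connection refused" from the machine-config-daemon liveness probe and "context deadline exceeded (Client.Timeout exceeded while awaiting headers)" from the nova-api-0 startup probes, are the two classic HTTP-probe failure modes: nothing listening on the port versus a listener that is too slow to answer. Roughly what an HTTP GET probe does, sketched with Python's standard library (kubelet treats 2xx/3xx as success; the URL and timeout here are illustrative):

import socket
import urllib.error
import urllib.request

def http_probe(url, timeout=1.0):
    """Return (ok, detail): ok only for a 2xx/3xx answer within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400, f"status {resp.status}"
    except urllib.error.URLError as e:
        return False, f"failure: {e.reason}"        # e.g. connection refused
    except (socket.timeout, TimeoutError):
        return False, "failure: timeout awaiting response headers"

print(http_probe("http://127.0.0.1:8798/health"))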
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.969388 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-scripts\") pod \"f671b8ea-21ca-4f84-a181-e98552324ee7\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.969477 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67gh7\" (UniqueName: \"kubernetes.io/projected/f671b8ea-21ca-4f84-a181-e98552324ee7-kube-api-access-67gh7\") pod \"f671b8ea-21ca-4f84-a181-e98552324ee7\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.969497 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-config-data\") pod \"f671b8ea-21ca-4f84-a181-e98552324ee7\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.969632 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-combined-ca-bundle\") pod \"f671b8ea-21ca-4f84-a181-e98552324ee7\" (UID: \"f671b8ea-21ca-4f84-a181-e98552324ee7\") " Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.975353 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-scripts" (OuterVolumeSpecName: "scripts") pod "f671b8ea-21ca-4f84-a181-e98552324ee7" (UID: "f671b8ea-21ca-4f84-a181-e98552324ee7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.975656 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f671b8ea-21ca-4f84-a181-e98552324ee7-kube-api-access-67gh7" (OuterVolumeSpecName: "kube-api-access-67gh7") pod "f671b8ea-21ca-4f84-a181-e98552324ee7" (UID: "f671b8ea-21ca-4f84-a181-e98552324ee7"). InnerVolumeSpecName "kube-api-access-67gh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:27 crc kubenswrapper[4731]: I1125 15:16:27.990922 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.000379 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f671b8ea-21ca-4f84-a181-e98552324ee7" (UID: "f671b8ea-21ca-4f84-a181-e98552324ee7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.010864 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-config-data" (OuterVolumeSpecName: "config-data") pod "f671b8ea-21ca-4f84-a181-e98552324ee7" (UID: "f671b8ea-21ca-4f84-a181-e98552324ee7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.071647 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fttrx\" (UniqueName: \"kubernetes.io/projected/eafdd6d2-3514-4de8-afc5-651d22a486a4-kube-api-access-fttrx\") pod \"eafdd6d2-3514-4de8-afc5-651d22a486a4\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.072080 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-svc\") pod \"eafdd6d2-3514-4de8-afc5-651d22a486a4\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.072140 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-swift-storage-0\") pod \"eafdd6d2-3514-4de8-afc5-651d22a486a4\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.072283 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-nb\") pod \"eafdd6d2-3514-4de8-afc5-651d22a486a4\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.072362 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-config\") pod \"eafdd6d2-3514-4de8-afc5-651d22a486a4\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.072399 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-sb\") pod \"eafdd6d2-3514-4de8-afc5-651d22a486a4\" (UID: \"eafdd6d2-3514-4de8-afc5-651d22a486a4\") " Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.073036 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67gh7\" (UniqueName: \"kubernetes.io/projected/f671b8ea-21ca-4f84-a181-e98552324ee7-kube-api-access-67gh7\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.073059 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.073094 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.073543 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f671b8ea-21ca-4f84-a181-e98552324ee7-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.075204 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eafdd6d2-3514-4de8-afc5-651d22a486a4-kube-api-access-fttrx" (OuterVolumeSpecName: "kube-api-access-fttrx") pod "eafdd6d2-3514-4de8-afc5-651d22a486a4" 
(UID: "eafdd6d2-3514-4de8-afc5-651d22a486a4"). InnerVolumeSpecName "kube-api-access-fttrx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.122895 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eafdd6d2-3514-4de8-afc5-651d22a486a4" (UID: "eafdd6d2-3514-4de8-afc5-651d22a486a4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.130312 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eafdd6d2-3514-4de8-afc5-651d22a486a4" (UID: "eafdd6d2-3514-4de8-afc5-651d22a486a4"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.130923 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eafdd6d2-3514-4de8-afc5-651d22a486a4" (UID: "eafdd6d2-3514-4de8-afc5-651d22a486a4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.131368 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-config" (OuterVolumeSpecName: "config") pod "eafdd6d2-3514-4de8-afc5-651d22a486a4" (UID: "eafdd6d2-3514-4de8-afc5-651d22a486a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.138454 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eafdd6d2-3514-4de8-afc5-651d22a486a4" (UID: "eafdd6d2-3514-4de8-afc5-651d22a486a4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.174895 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.174932 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.174949 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fttrx\" (UniqueName: \"kubernetes.io/projected/eafdd6d2-3514-4de8-afc5-651d22a486a4-kube-api-access-fttrx\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.174959 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.174971 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.174982 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eafdd6d2-3514-4de8-afc5-651d22a486a4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.407997 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-h5455" event={"ID":"f671b8ea-21ca-4f84-a181-e98552324ee7","Type":"ContainerDied","Data":"cc4df55a47377dd8b4e5ed3770c3bd67f794af4cc21e3aede0d063c2d4253bc3"} Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.408216 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc4df55a47377dd8b4e5ed3770c3bd67f794af4cc21e3aede0d063c2d4253bc3" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.408270 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-h5455" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.411387 4731 generic.go:334] "Generic (PLEG): container finished" podID="eafdd6d2-3514-4de8-afc5-651d22a486a4" containerID="61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e" exitCode=0 Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.411504 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.411541 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" event={"ID":"eafdd6d2-3514-4de8-afc5-651d22a486a4","Type":"ContainerDied","Data":"61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e"} Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.411569 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-nc8dk" event={"ID":"eafdd6d2-3514-4de8-afc5-651d22a486a4","Type":"ContainerDied","Data":"59dd08c1733195992d4292c6fccfba09f5b8892caf399df553899cb6b0071797"} Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.411588 4731 scope.go:117] "RemoveContainer" containerID="61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.460948 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-nc8dk"] Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.467072 4731 scope.go:117] "RemoveContainer" containerID="668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.471061 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-nc8dk"] Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.551593 4731 scope.go:117] "RemoveContainer" containerID="61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e" Nov 25 15:16:28 crc kubenswrapper[4731]: E1125 15:16:28.553194 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e\": container with ID starting with 61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e not found: ID does not exist" containerID="61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.553230 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e"} err="failed to get container status \"61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e\": rpc error: code = NotFound desc = could not find container \"61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e\": container with ID starting with 61971286662dde0d3836d01c7e4f06cd147f65bbf23e5c84d1b5b664c1fbc38e not found: ID does not exist" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.553255 4731 scope.go:117] "RemoveContainer" containerID="668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0" Nov 25 15:16:28 crc kubenswrapper[4731]: E1125 15:16:28.553639 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0\": container with ID starting with 668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0 not found: ID does not exist" containerID="668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.553670 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0"} err="failed to get container status 
\"668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0\": rpc error: code = NotFound desc = could not find container \"668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0\": container with ID starting with 668df97d5a3bfb933ecad71ea728e4dc88e710ddecc92a7aaf7101c496aeb7b0 not found: ID does not exist" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.594407 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.594666 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="18253943-10e9-4132-8f48-99d3648a7ce0" containerName="nova-api-log" containerID="cri-o://c4ca79c2fa65938e8215ab1d22cb5ec2a319f120caef3bf92dd1fcfcb7ac0bda" gracePeriod=30 Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.595032 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="18253943-10e9-4132-8f48-99d3648a7ce0" containerName="nova-api-api" containerID="cri-o://cdea132b49e0c75927f89abf6996b34df152d604bd1bd703cfb96050703f0b66" gracePeriod=30 Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.616787 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.626173 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.626400 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" containerName="nova-metadata-log" containerID="cri-o://d0b6a285a54eea0d4fbd6eb28fbddf5a1b32a33c408c4adb1d9afd248178a07c" gracePeriod=30 Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.626477 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" containerName="nova-metadata-metadata" containerID="cri-o://65d418a4fda09f2dac2df562a0be97732bc316081789ec78243d1c9bef457b90" gracePeriod=30 Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.778393 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.778467 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:16:28 crc kubenswrapper[4731]: I1125 15:16:28.826135 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eafdd6d2-3514-4de8-afc5-651d22a486a4" path="/var/lib/kubelet/pods/eafdd6d2-3514-4de8-afc5-651d22a486a4/volumes" Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.445698 4731 generic.go:334] "Generic (PLEG): container finished" podID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" containerID="65d418a4fda09f2dac2df562a0be97732bc316081789ec78243d1c9bef457b90" exitCode=0 Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.446089 4731 generic.go:334] "Generic (PLEG): container finished" podID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" containerID="d0b6a285a54eea0d4fbd6eb28fbddf5a1b32a33c408c4adb1d9afd248178a07c" exitCode=143 Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.445931 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0","Type":"ContainerDied","Data":"65d418a4fda09f2dac2df562a0be97732bc316081789ec78243d1c9bef457b90"} Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.446704 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0","Type":"ContainerDied","Data":"d0b6a285a54eea0d4fbd6eb28fbddf5a1b32a33c408c4adb1d9afd248178a07c"} Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.451639 4731 generic.go:334] "Generic (PLEG): container finished" podID="18253943-10e9-4132-8f48-99d3648a7ce0" containerID="c4ca79c2fa65938e8215ab1d22cb5ec2a319f120caef3bf92dd1fcfcb7ac0bda" exitCode=143 Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.451754 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18253943-10e9-4132-8f48-99d3648a7ce0","Type":"ContainerDied","Data":"c4ca79c2fa65938e8215ab1d22cb5ec2a319f120caef3bf92dd1fcfcb7ac0bda"} Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.832655 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.907599 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-config-data\") pod \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.907652 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-logs\") pod \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.907743 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jthw\" (UniqueName: \"kubernetes.io/projected/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-kube-api-access-4jthw\") pod \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.907797 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-combined-ca-bundle\") pod \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.907908 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-nova-metadata-tls-certs\") pod \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\" (UID: \"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0\") " Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.908316 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-logs" (OuterVolumeSpecName: "logs") pod "3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" (UID: "3b7f3eed-89a2-4d7f-8ab6-f05c834665e0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.909001 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.926152 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-kube-api-access-4jthw" (OuterVolumeSpecName: "kube-api-access-4jthw") pod "3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" (UID: "3b7f3eed-89a2-4d7f-8ab6-f05c834665e0"). InnerVolumeSpecName "kube-api-access-4jthw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.948175 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" (UID: "3b7f3eed-89a2-4d7f-8ab6-f05c834665e0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.950789 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-config-data" (OuterVolumeSpecName: "config-data") pod "3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" (UID: "3b7f3eed-89a2-4d7f-8ab6-f05c834665e0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:29 crc kubenswrapper[4731]: I1125 15:16:29.979434 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" (UID: "3b7f3eed-89a2-4d7f-8ab6-f05c834665e0"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.010408 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.010448 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jthw\" (UniqueName: \"kubernetes.io/projected/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-kube-api-access-4jthw\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.010464 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.010476 4731 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.470584 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="7ab22e2f-bced-4f8a-9507-25d69bccf58a" containerName="nova-scheduler-scheduler" containerID="cri-o://035a4acf13b5bfe79752b9460a9a8d1e2b1b4c148cfb7f25bd6a9b3f20707381" gracePeriod=30 Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.470725 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.473794 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3b7f3eed-89a2-4d7f-8ab6-f05c834665e0","Type":"ContainerDied","Data":"ef91c1b79aeb386fa0dab230764c69c70bcee996ae49562b6d88ca696e8fb3ce"} Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.476155 4731 scope.go:117] "RemoveContainer" containerID="65d418a4fda09f2dac2df562a0be97732bc316081789ec78243d1c9bef457b90" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.517378 4731 scope.go:117] "RemoveContainer" containerID="d0b6a285a54eea0d4fbd6eb28fbddf5a1b32a33c408c4adb1d9afd248178a07c" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.531146 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.547528 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.558747 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:30 crc kubenswrapper[4731]: E1125 15:16:30.559514 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" containerName="nova-metadata-log" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.559622 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" containerName="nova-metadata-log" Nov 25 15:16:30 crc kubenswrapper[4731]: E1125 15:16:30.559700 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" containerName="nova-metadata-metadata" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.559788 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" containerName="nova-metadata-metadata" Nov 25 15:16:30 crc kubenswrapper[4731]: E1125 15:16:30.559896 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eafdd6d2-3514-4de8-afc5-651d22a486a4" containerName="init" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.559961 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="eafdd6d2-3514-4de8-afc5-651d22a486a4" containerName="init" Nov 25 15:16:30 crc kubenswrapper[4731]: E1125 15:16:30.560015 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f671b8ea-21ca-4f84-a181-e98552324ee7" containerName="nova-manage" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.560064 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f671b8ea-21ca-4f84-a181-e98552324ee7" containerName="nova-manage" Nov 25 15:16:30 crc kubenswrapper[4731]: E1125 15:16:30.560122 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eafdd6d2-3514-4de8-afc5-651d22a486a4" containerName="dnsmasq-dns" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.560174 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="eafdd6d2-3514-4de8-afc5-651d22a486a4" containerName="dnsmasq-dns" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.560391 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f671b8ea-21ca-4f84-a181-e98552324ee7" containerName="nova-manage" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.560458 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" containerName="nova-metadata-metadata" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.560529 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="eafdd6d2-3514-4de8-afc5-651d22a486a4" containerName="dnsmasq-dns" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.560609 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" containerName="nova-metadata-log" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.562101 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.565028 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.565297 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.572152 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.622116 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-logs\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.622403 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.622526 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95dfw\" (UniqueName: \"kubernetes.io/projected/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-kube-api-access-95dfw\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.622765 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-config-data\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.623039 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.724419 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95dfw\" (UniqueName: \"kubernetes.io/projected/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-kube-api-access-95dfw\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.724513 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-config-data\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.724612 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " 
pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.724743 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-logs\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.724792 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.725583 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-logs\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.729664 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.730206 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.730859 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-config-data\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.740758 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95dfw\" (UniqueName: \"kubernetes.io/projected/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-kube-api-access-95dfw\") pod \"nova-metadata-0\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " pod="openstack/nova-metadata-0" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.827755 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7f3eed-89a2-4d7f-8ab6-f05c834665e0" path="/var/lib/kubelet/pods/3b7f3eed-89a2-4d7f-8ab6-f05c834665e0/volumes" Nov 25 15:16:30 crc kubenswrapper[4731]: I1125 15:16:30.881233 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:16:31 crc kubenswrapper[4731]: I1125 15:16:31.331327 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:16:31 crc kubenswrapper[4731]: I1125 15:16:31.432466 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 15:16:31 crc kubenswrapper[4731]: I1125 15:16:31.486573 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bfe044d-13a7-40d5-b612-b95d5d1e39ea","Type":"ContainerStarted","Data":"44ed8e8534c56d2f314d2e19c33ac98281aa8355676e64d0307376c114490825"} Nov 25 15:16:32 crc kubenswrapper[4731]: E1125 15:16:32.095236 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="035a4acf13b5bfe79752b9460a9a8d1e2b1b4c148cfb7f25bd6a9b3f20707381" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 15:16:32 crc kubenswrapper[4731]: E1125 15:16:32.104091 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="035a4acf13b5bfe79752b9460a9a8d1e2b1b4c148cfb7f25bd6a9b3f20707381" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 15:16:32 crc kubenswrapper[4731]: E1125 15:16:32.108268 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="035a4acf13b5bfe79752b9460a9a8d1e2b1b4c148cfb7f25bd6a9b3f20707381" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 15:16:32 crc kubenswrapper[4731]: E1125 15:16:32.108347 4731 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="7ab22e2f-bced-4f8a-9507-25d69bccf58a" containerName="nova-scheduler-scheduler" Nov 25 15:16:32 crc kubenswrapper[4731]: I1125 15:16:32.501072 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bfe044d-13a7-40d5-b612-b95d5d1e39ea","Type":"ContainerStarted","Data":"6c3cb3b9773d85266db990098c6666aa14be4f987b39067202fc96f0b187e450"} Nov 25 15:16:32 crc kubenswrapper[4731]: I1125 15:16:32.501380 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bfe044d-13a7-40d5-b612-b95d5d1e39ea","Type":"ContainerStarted","Data":"f272995c2df8210d0df3e9972775067ab6307b04d5ca6511ce3a35771b29f150"} Nov 25 15:16:32 crc kubenswrapper[4731]: I1125 15:16:32.524259 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.524237154 podStartE2EDuration="2.524237154s" podCreationTimestamp="2025-11-25 15:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:32.521233348 +0000 UTC m=+1351.268367778" watchObservedRunningTime="2025-11-25 15:16:32.524237154 +0000 UTC m=+1351.271371594" Nov 25 15:16:33 crc kubenswrapper[4731]: I1125 15:16:33.512671 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="7ab22e2f-bced-4f8a-9507-25d69bccf58a" containerID="035a4acf13b5bfe79752b9460a9a8d1e2b1b4c148cfb7f25bd6a9b3f20707381" exitCode=0 Nov 25 15:16:33 crc kubenswrapper[4731]: I1125 15:16:33.512747 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7ab22e2f-bced-4f8a-9507-25d69bccf58a","Type":"ContainerDied","Data":"035a4acf13b5bfe79752b9460a9a8d1e2b1b4c148cfb7f25bd6a9b3f20707381"} Nov 25 15:16:33 crc kubenswrapper[4731]: I1125 15:16:33.956423 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.100135 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h4t4\" (UniqueName: \"kubernetes.io/projected/7ab22e2f-bced-4f8a-9507-25d69bccf58a-kube-api-access-5h4t4\") pod \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.100969 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-config-data\") pod \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.101348 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-combined-ca-bundle\") pod \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\" (UID: \"7ab22e2f-bced-4f8a-9507-25d69bccf58a\") " Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.110640 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ab22e2f-bced-4f8a-9507-25d69bccf58a-kube-api-access-5h4t4" (OuterVolumeSpecName: "kube-api-access-5h4t4") pod "7ab22e2f-bced-4f8a-9507-25d69bccf58a" (UID: "7ab22e2f-bced-4f8a-9507-25d69bccf58a"). InnerVolumeSpecName "kube-api-access-5h4t4". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.147946 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7ab22e2f-bced-4f8a-9507-25d69bccf58a" (UID: "7ab22e2f-bced-4f8a-9507-25d69bccf58a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.149201 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-config-data" (OuterVolumeSpecName: "config-data") pod "7ab22e2f-bced-4f8a-9507-25d69bccf58a" (UID: "7ab22e2f-bced-4f8a-9507-25d69bccf58a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.205437 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.205484 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ab22e2f-bced-4f8a-9507-25d69bccf58a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.205500 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h4t4\" (UniqueName: \"kubernetes.io/projected/7ab22e2f-bced-4f8a-9507-25d69bccf58a-kube-api-access-5h4t4\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.525749 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7ab22e2f-bced-4f8a-9507-25d69bccf58a","Type":"ContainerDied","Data":"1d4393048a69fd28f8716903f9ae6fbe42303a3fc1f61a0ce8f3d883016c62a1"} Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.525840 4731 scope.go:117] "RemoveContainer" containerID="035a4acf13b5bfe79752b9460a9a8d1e2b1b4c148cfb7f25bd6a9b3f20707381" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.525767 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.529406 4731 generic.go:334] "Generic (PLEG): container finished" podID="18253943-10e9-4132-8f48-99d3648a7ce0" containerID="cdea132b49e0c75927f89abf6996b34df152d604bd1bd703cfb96050703f0b66" exitCode=0 Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.529464 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18253943-10e9-4132-8f48-99d3648a7ce0","Type":"ContainerDied","Data":"cdea132b49e0c75927f89abf6996b34df152d604bd1bd703cfb96050703f0b66"} Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.574014 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dzrd6"] Nov 25 15:16:34 crc kubenswrapper[4731]: E1125 15:16:34.574561 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ab22e2f-bced-4f8a-9507-25d69bccf58a" containerName="nova-scheduler-scheduler" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.574583 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ab22e2f-bced-4f8a-9507-25d69bccf58a" containerName="nova-scheduler-scheduler" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.574844 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ab22e2f-bced-4f8a-9507-25d69bccf58a" containerName="nova-scheduler-scheduler" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.576597 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.586232 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.599128 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.614151 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzrd6"] Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.645406 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.646864 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.649074 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.684188 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.714810 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs422\" (UniqueName: \"kubernetes.io/projected/9752677e-1420-41db-a27c-96995a7553b8-kube-api-access-xs422\") pod \"redhat-operators-dzrd6\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.714944 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-catalog-content\") pod \"redhat-operators-dzrd6\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.715101 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-utilities\") pod \"redhat-operators-dzrd6\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.715201 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-config-data\") pod \"nova-scheduler-0\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.715318 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94pnn\" (UniqueName: \"kubernetes.io/projected/cc7eee5f-3ccc-4620-bc52-97e768a361b3-kube-api-access-94pnn\") pod \"nova-scheduler-0\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.715393 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " 
pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.816497 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.816586 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs422\" (UniqueName: \"kubernetes.io/projected/9752677e-1420-41db-a27c-96995a7553b8-kube-api-access-xs422\") pod \"redhat-operators-dzrd6\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.816678 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-catalog-content\") pod \"redhat-operators-dzrd6\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.816724 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-utilities\") pod \"redhat-operators-dzrd6\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.816752 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-config-data\") pod \"nova-scheduler-0\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.816796 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94pnn\" (UniqueName: \"kubernetes.io/projected/cc7eee5f-3ccc-4620-bc52-97e768a361b3-kube-api-access-94pnn\") pod \"nova-scheduler-0\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.817265 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-utilities\") pod \"redhat-operators-dzrd6\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.817598 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-catalog-content\") pod \"redhat-operators-dzrd6\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.820254 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.820965 4731 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-config-data\") pod \"nova-scheduler-0\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.824722 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ab22e2f-bced-4f8a-9507-25d69bccf58a" path="/var/lib/kubelet/pods/7ab22e2f-bced-4f8a-9507-25d69bccf58a/volumes" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.840950 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94pnn\" (UniqueName: \"kubernetes.io/projected/cc7eee5f-3ccc-4620-bc52-97e768a361b3-kube-api-access-94pnn\") pod \"nova-scheduler-0\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " pod="openstack/nova-scheduler-0" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.846867 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs422\" (UniqueName: \"kubernetes.io/projected/9752677e-1420-41db-a27c-96995a7553b8-kube-api-access-xs422\") pod \"redhat-operators-dzrd6\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.906458 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:34 crc kubenswrapper[4731]: I1125 15:16:34.973519 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.078870 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.228281 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-config-data\") pod \"18253943-10e9-4132-8f48-99d3648a7ce0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.228505 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fvfb\" (UniqueName: \"kubernetes.io/projected/18253943-10e9-4132-8f48-99d3648a7ce0-kube-api-access-8fvfb\") pod \"18253943-10e9-4132-8f48-99d3648a7ce0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.228627 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-combined-ca-bundle\") pod \"18253943-10e9-4132-8f48-99d3648a7ce0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.228658 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18253943-10e9-4132-8f48-99d3648a7ce0-logs\") pod \"18253943-10e9-4132-8f48-99d3648a7ce0\" (UID: \"18253943-10e9-4132-8f48-99d3648a7ce0\") " Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.229222 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18253943-10e9-4132-8f48-99d3648a7ce0-logs" (OuterVolumeSpecName: "logs") pod "18253943-10e9-4132-8f48-99d3648a7ce0" (UID: "18253943-10e9-4132-8f48-99d3648a7ce0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.236990 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18253943-10e9-4132-8f48-99d3648a7ce0-kube-api-access-8fvfb" (OuterVolumeSpecName: "kube-api-access-8fvfb") pod "18253943-10e9-4132-8f48-99d3648a7ce0" (UID: "18253943-10e9-4132-8f48-99d3648a7ce0"). InnerVolumeSpecName "kube-api-access-8fvfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.259756 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18253943-10e9-4132-8f48-99d3648a7ce0" (UID: "18253943-10e9-4132-8f48-99d3648a7ce0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.269399 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-config-data" (OuterVolumeSpecName: "config-data") pod "18253943-10e9-4132-8f48-99d3648a7ce0" (UID: "18253943-10e9-4132-8f48-99d3648a7ce0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.331357 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.331417 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/18253943-10e9-4132-8f48-99d3648a7ce0-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.331437 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/18253943-10e9-4132-8f48-99d3648a7ce0-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.331449 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fvfb\" (UniqueName: \"kubernetes.io/projected/18253943-10e9-4132-8f48-99d3648a7ce0-kube-api-access-8fvfb\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.413221 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.415146 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="7142ee70-31f5-466a-8772-d4e14f8986ee" containerName="kube-state-metrics" containerID="cri-o://f6c1670ab91512ffec7d9ee0439365a0b2385bf092e65e2f5c56860ec740dbd9" gracePeriod=30 Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.540418 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dzrd6"] Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.573685 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzrd6" event={"ID":"9752677e-1420-41db-a27c-96995a7553b8","Type":"ContainerStarted","Data":"e9b653e05b2ab0a8bfdb080d76ce9c69463de50f5524fe31a0142aa0ac1b1c8c"} Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.586015 4731 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"18253943-10e9-4132-8f48-99d3648a7ce0","Type":"ContainerDied","Data":"a4cafb5d8f166f8955578621cfebdc90eeeb60f241f0609ead07c7545396bafe"} Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.586058 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.586081 4731 scope.go:117] "RemoveContainer" containerID="cdea132b49e0c75927f89abf6996b34df152d604bd1bd703cfb96050703f0b66" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.592327 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.636194 4731 scope.go:117] "RemoveContainer" containerID="c4ca79c2fa65938e8215ab1d22cb5ec2a319f120caef3bf92dd1fcfcb7ac0bda" Nov 25 15:16:35 crc kubenswrapper[4731]: W1125 15:16:35.643528 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc7eee5f_3ccc_4620_bc52_97e768a361b3.slice/crio-37ed2c6ccd14e781471c53f8771e38d8da7fd1b52ceb4051154560c3d783a390 WatchSource:0}: Error finding container 37ed2c6ccd14e781471c53f8771e38d8da7fd1b52ceb4051154560c3d783a390: Status 404 returned error can't find the container with id 37ed2c6ccd14e781471c53f8771e38d8da7fd1b52ceb4051154560c3d783a390 Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.676579 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.691861 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.702604 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 15:16:35 crc kubenswrapper[4731]: E1125 15:16:35.703116 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18253943-10e9-4132-8f48-99d3648a7ce0" containerName="nova-api-api" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.703130 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="18253943-10e9-4132-8f48-99d3648a7ce0" containerName="nova-api-api" Nov 25 15:16:35 crc kubenswrapper[4731]: E1125 15:16:35.703143 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18253943-10e9-4132-8f48-99d3648a7ce0" containerName="nova-api-log" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.703149 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="18253943-10e9-4132-8f48-99d3648a7ce0" containerName="nova-api-log" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.703345 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="18253943-10e9-4132-8f48-99d3648a7ce0" containerName="nova-api-api" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.703375 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="18253943-10e9-4132-8f48-99d3648a7ce0" containerName="nova-api-log" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.704455 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.710624 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.720369 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.841780 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.841937 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2mb2\" (UniqueName: \"kubernetes.io/projected/14dd41a0-6d32-4e6d-903b-a212c83ba972-kube-api-access-x2mb2\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.842118 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14dd41a0-6d32-4e6d-903b-a212c83ba972-logs\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.842475 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-config-data\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.882912 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.882967 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.944232 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2mb2\" (UniqueName: \"kubernetes.io/projected/14dd41a0-6d32-4e6d-903b-a212c83ba972-kube-api-access-x2mb2\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.944357 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14dd41a0-6d32-4e6d-903b-a212c83ba972-logs\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.946702 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14dd41a0-6d32-4e6d-903b-a212c83ba972-logs\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.947019 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-config-data\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 
crc kubenswrapper[4731]: I1125 15:16:35.947101 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.951953 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.953091 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-config-data\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:35 crc kubenswrapper[4731]: I1125 15:16:35.962440 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2mb2\" (UniqueName: \"kubernetes.io/projected/14dd41a0-6d32-4e6d-903b-a212c83ba972-kube-api-access-x2mb2\") pod \"nova-api-0\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " pod="openstack/nova-api-0" Nov 25 15:16:36 crc kubenswrapper[4731]: I1125 15:16:36.026240 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:16:36 crc kubenswrapper[4731]: I1125 15:16:36.494947 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:16:36 crc kubenswrapper[4731]: W1125 15:16:36.501289 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14dd41a0_6d32_4e6d_903b_a212c83ba972.slice/crio-7e7f597145b7bb51aeecccb5b6e2a11ecfc321d6d425f2c40a48bad20a8258af WatchSource:0}: Error finding container 7e7f597145b7bb51aeecccb5b6e2a11ecfc321d6d425f2c40a48bad20a8258af: Status 404 returned error can't find the container with id 7e7f597145b7bb51aeecccb5b6e2a11ecfc321d6d425f2c40a48bad20a8258af Nov 25 15:16:36 crc kubenswrapper[4731]: I1125 15:16:36.603911 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14dd41a0-6d32-4e6d-903b-a212c83ba972","Type":"ContainerStarted","Data":"7e7f597145b7bb51aeecccb5b6e2a11ecfc321d6d425f2c40a48bad20a8258af"} Nov 25 15:16:36 crc kubenswrapper[4731]: I1125 15:16:36.605933 4731 generic.go:334] "Generic (PLEG): container finished" podID="7142ee70-31f5-466a-8772-d4e14f8986ee" containerID="f6c1670ab91512ffec7d9ee0439365a0b2385bf092e65e2f5c56860ec740dbd9" exitCode=2 Nov 25 15:16:36 crc kubenswrapper[4731]: I1125 15:16:36.606021 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7142ee70-31f5-466a-8772-d4e14f8986ee","Type":"ContainerDied","Data":"f6c1670ab91512ffec7d9ee0439365a0b2385bf092e65e2f5c56860ec740dbd9"} Nov 25 15:16:36 crc kubenswrapper[4731]: I1125 15:16:36.609716 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cc7eee5f-3ccc-4620-bc52-97e768a361b3","Type":"ContainerStarted","Data":"37ed2c6ccd14e781471c53f8771e38d8da7fd1b52ceb4051154560c3d783a390"} Nov 25 15:16:36 crc kubenswrapper[4731]: I1125 15:16:36.821900 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="18253943-10e9-4132-8f48-99d3648a7ce0" path="/var/lib/kubelet/pods/18253943-10e9-4132-8f48-99d3648a7ce0/volumes" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.525360 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.536217 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.536552 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="ceilometer-central-agent" containerID="cri-o://29a51530c3a5e44665af7dd1e95a87809e1470285a2ab4515f8d24a3ddd6473c" gracePeriod=30 Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.536722 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="proxy-httpd" containerID="cri-o://655f64c508ddcd302953c83ec0733a1f29a0f9977a1cc74454e7c3075f6c09b5" gracePeriod=30 Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.536783 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="sg-core" containerID="cri-o://53840393b2ad88b9de0f278671fa452dcd514d3e9acbc56599937c9a3d3b2a33" gracePeriod=30 Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.537448 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="ceilometer-notification-agent" containerID="cri-o://4cdc04ee1b82eaf89a9b84615d1a311cc7e5613cf04fc9be9beabbb6bd7dc4b3" gracePeriod=30 Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.578834 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c427c\" (UniqueName: \"kubernetes.io/projected/7142ee70-31f5-466a-8772-d4e14f8986ee-kube-api-access-c427c\") pod \"7142ee70-31f5-466a-8772-d4e14f8986ee\" (UID: \"7142ee70-31f5-466a-8772-d4e14f8986ee\") " Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.584107 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7142ee70-31f5-466a-8772-d4e14f8986ee-kube-api-access-c427c" (OuterVolumeSpecName: "kube-api-access-c427c") pod "7142ee70-31f5-466a-8772-d4e14f8986ee" (UID: "7142ee70-31f5-466a-8772-d4e14f8986ee"). InnerVolumeSpecName "kube-api-access-c427c". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.643166 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14dd41a0-6d32-4e6d-903b-a212c83ba972","Type":"ContainerStarted","Data":"d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f"} Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.658892 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.659780 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"7142ee70-31f5-466a-8772-d4e14f8986ee","Type":"ContainerDied","Data":"49ed0ac0a8177fbb9e74d586c21aa0fcb665890d6cd9c62dcfe7d5d339b0818c"} Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.659839 4731 scope.go:117] "RemoveContainer" containerID="f6c1670ab91512ffec7d9ee0439365a0b2385bf092e65e2f5c56860ec740dbd9" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.667496 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cc7eee5f-3ccc-4620-bc52-97e768a361b3","Type":"ContainerStarted","Data":"18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279"} Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.670561 4731 generic.go:334] "Generic (PLEG): container finished" podID="9752677e-1420-41db-a27c-96995a7553b8" containerID="b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac" exitCode=0 Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.670593 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzrd6" event={"ID":"9752677e-1420-41db-a27c-96995a7553b8","Type":"ContainerDied","Data":"b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac"} Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.680539 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c427c\" (UniqueName: \"kubernetes.io/projected/7142ee70-31f5-466a-8772-d4e14f8986ee-kube-api-access-c427c\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.708687 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.708668102 podStartE2EDuration="3.708668102s" podCreationTimestamp="2025-11-25 15:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:37.695842948 +0000 UTC m=+1356.442977398" watchObservedRunningTime="2025-11-25 15:16:37.708668102 +0000 UTC m=+1356.455802532" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.742759 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.754026 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.779602 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:16:37 crc kubenswrapper[4731]: E1125 15:16:37.780077 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7142ee70-31f5-466a-8772-d4e14f8986ee" containerName="kube-state-metrics" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.780091 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7142ee70-31f5-466a-8772-d4e14f8986ee" containerName="kube-state-metrics" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.780279 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7142ee70-31f5-466a-8772-d4e14f8986ee" containerName="kube-state-metrics" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.781002 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.784779 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.785135 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.788679 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.884508 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/57141195-3615-4e7f-b643-96c727cb9070-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.884858 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/57141195-3615-4e7f-b643-96c727cb9070-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.884885 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57141195-3615-4e7f-b643-96c727cb9070-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.884994 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7786w\" (UniqueName: \"kubernetes.io/projected/57141195-3615-4e7f-b643-96c727cb9070-kube-api-access-7786w\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.988321 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/57141195-3615-4e7f-b643-96c727cb9070-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.988387 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/57141195-3615-4e7f-b643-96c727cb9070-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.988419 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57141195-3615-4e7f-b643-96c727cb9070-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.988508 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7786w\" 
(UniqueName: \"kubernetes.io/projected/57141195-3615-4e7f-b643-96c727cb9070-kube-api-access-7786w\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.994381 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/57141195-3615-4e7f-b643-96c727cb9070-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.994400 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/57141195-3615-4e7f-b643-96c727cb9070-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:37 crc kubenswrapper[4731]: I1125 15:16:37.994891 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57141195-3615-4e7f-b643-96c727cb9070-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.005927 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7786w\" (UniqueName: \"kubernetes.io/projected/57141195-3615-4e7f-b643-96c727cb9070-kube-api-access-7786w\") pod \"kube-state-metrics-0\" (UID: \"57141195-3615-4e7f-b643-96c727cb9070\") " pod="openstack/kube-state-metrics-0" Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.100077 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.605785 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.678851 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"57141195-3615-4e7f-b643-96c727cb9070","Type":"ContainerStarted","Data":"0cde17681f370db6c65e0bbdc9b6520eae8cd9ab61172b552426f0a0ccbc2d14"} Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.680186 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14dd41a0-6d32-4e6d-903b-a212c83ba972","Type":"ContainerStarted","Data":"45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d"} Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.683659 4731 generic.go:334] "Generic (PLEG): container finished" podID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerID="655f64c508ddcd302953c83ec0733a1f29a0f9977a1cc74454e7c3075f6c09b5" exitCode=0 Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.683693 4731 generic.go:334] "Generic (PLEG): container finished" podID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerID="53840393b2ad88b9de0f278671fa452dcd514d3e9acbc56599937c9a3d3b2a33" exitCode=2 Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.683701 4731 generic.go:334] "Generic (PLEG): container finished" podID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerID="29a51530c3a5e44665af7dd1e95a87809e1470285a2ab4515f8d24a3ddd6473c" exitCode=0 Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.683808 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7ac4656-528c-4e44-8b67-f39c0712e4ac","Type":"ContainerDied","Data":"655f64c508ddcd302953c83ec0733a1f29a0f9977a1cc74454e7c3075f6c09b5"} Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.683861 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7ac4656-528c-4e44-8b67-f39c0712e4ac","Type":"ContainerDied","Data":"53840393b2ad88b9de0f278671fa452dcd514d3e9acbc56599937c9a3d3b2a33"} Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.683896 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7ac4656-528c-4e44-8b67-f39c0712e4ac","Type":"ContainerDied","Data":"29a51530c3a5e44665af7dd1e95a87809e1470285a2ab4515f8d24a3ddd6473c"} Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.707706 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.7076868960000002 podStartE2EDuration="3.707686896s" podCreationTimestamp="2025-11-25 15:16:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:38.702910526 +0000 UTC m=+1357.450044966" watchObservedRunningTime="2025-11-25 15:16:38.707686896 +0000 UTC m=+1357.454821316" Nov 25 15:16:38 crc kubenswrapper[4731]: I1125 15:16:38.822835 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7142ee70-31f5-466a-8772-d4e14f8986ee" path="/var/lib/kubelet/pods/7142ee70-31f5-466a-8772-d4e14f8986ee/volumes" Nov 25 15:16:39 crc kubenswrapper[4731]: I1125 15:16:39.695547 4731 generic.go:334] "Generic (PLEG): container finished" podID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerID="4cdc04ee1b82eaf89a9b84615d1a311cc7e5613cf04fc9be9beabbb6bd7dc4b3" exitCode=0 Nov 25 
15:16:39 crc kubenswrapper[4731]: I1125 15:16:39.695618 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7ac4656-528c-4e44-8b67-f39c0712e4ac","Type":"ContainerDied","Data":"4cdc04ee1b82eaf89a9b84615d1a311cc7e5613cf04fc9be9beabbb6bd7dc4b3"} Nov 25 15:16:39 crc kubenswrapper[4731]: I1125 15:16:39.974511 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 15:16:40 crc kubenswrapper[4731]: I1125 15:16:40.882630 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 15:16:40 crc kubenswrapper[4731]: I1125 15:16:40.883008 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 15:16:41 crc kubenswrapper[4731]: I1125 15:16:41.894987 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:41 crc kubenswrapper[4731]: I1125 15:16:41.895017 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.372253 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.492886 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-run-httpd\") pod \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.493001 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-scripts\") pod \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.493059 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkv9v\" (UniqueName: \"kubernetes.io/projected/d7ac4656-528c-4e44-8b67-f39c0712e4ac-kube-api-access-dkv9v\") pod \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.493128 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-log-httpd\") pod \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.493234 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-config-data\") pod \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.493344 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-combined-ca-bundle\") pod \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.493418 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-sg-core-conf-yaml\") pod \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\" (UID: \"d7ac4656-528c-4e44-8b67-f39c0712e4ac\") " Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.493580 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d7ac4656-528c-4e44-8b67-f39c0712e4ac" (UID: "d7ac4656-528c-4e44-8b67-f39c0712e4ac"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.493683 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d7ac4656-528c-4e44-8b67-f39c0712e4ac" (UID: "d7ac4656-528c-4e44-8b67-f39c0712e4ac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.493971 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.498509 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-scripts" (OuterVolumeSpecName: "scripts") pod "d7ac4656-528c-4e44-8b67-f39c0712e4ac" (UID: "d7ac4656-528c-4e44-8b67-f39c0712e4ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.498585 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7ac4656-528c-4e44-8b67-f39c0712e4ac-kube-api-access-dkv9v" (OuterVolumeSpecName: "kube-api-access-dkv9v") pod "d7ac4656-528c-4e44-8b67-f39c0712e4ac" (UID: "d7ac4656-528c-4e44-8b67-f39c0712e4ac"). InnerVolumeSpecName "kube-api-access-dkv9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.531990 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d7ac4656-528c-4e44-8b67-f39c0712e4ac" (UID: "d7ac4656-528c-4e44-8b67-f39c0712e4ac"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.596668 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.596710 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkv9v\" (UniqueName: \"kubernetes.io/projected/d7ac4656-528c-4e44-8b67-f39c0712e4ac-kube-api-access-dkv9v\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.596725 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.596736 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d7ac4656-528c-4e44-8b67-f39c0712e4ac-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.604946 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-config-data" (OuterVolumeSpecName: "config-data") pod "d7ac4656-528c-4e44-8b67-f39c0712e4ac" (UID: "d7ac4656-528c-4e44-8b67-f39c0712e4ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.613026 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7ac4656-528c-4e44-8b67-f39c0712e4ac" (UID: "d7ac4656-528c-4e44-8b67-f39c0712e4ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.698370 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.698404 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7ac4656-528c-4e44-8b67-f39c0712e4ac-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.749126 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d7ac4656-528c-4e44-8b67-f39c0712e4ac","Type":"ContainerDied","Data":"0a29aefbc1bd44c86d68f3320985d21dcab6ce58487786e96c5818e11a8b1ebf"} Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.749173 4731 scope.go:117] "RemoveContainer" containerID="655f64c508ddcd302953c83ec0733a1f29a0f9977a1cc74454e7c3075f6c09b5" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.749286 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.768666 4731 scope.go:117] "RemoveContainer" containerID="53840393b2ad88b9de0f278671fa452dcd514d3e9acbc56599937c9a3d3b2a33" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.787562 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.803190 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.813310 4731 scope.go:117] "RemoveContainer" containerID="4cdc04ee1b82eaf89a9b84615d1a311cc7e5613cf04fc9be9beabbb6bd7dc4b3" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.819602 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:43 crc kubenswrapper[4731]: E1125 15:16:43.820069 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="ceilometer-central-agent" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.820088 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="ceilometer-central-agent" Nov 25 15:16:43 crc kubenswrapper[4731]: E1125 15:16:43.820102 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="ceilometer-notification-agent" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.820109 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="ceilometer-notification-agent" Nov 25 15:16:43 crc kubenswrapper[4731]: E1125 15:16:43.820146 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="sg-core" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.820153 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="sg-core" Nov 25 15:16:43 crc kubenswrapper[4731]: E1125 15:16:43.820174 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="proxy-httpd" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.820180 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="proxy-httpd" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.820359 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="ceilometer-notification-agent" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.820377 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="sg-core" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.820394 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="proxy-httpd" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.820408 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" containerName="ceilometer-central-agent" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.822116 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.824933 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.826246 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.826417 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.829747 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.870330 4731 scope.go:117] "RemoveContainer" containerID="29a51530c3a5e44665af7dd1e95a87809e1470285a2ab4515f8d24a3ddd6473c" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.902505 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-scripts\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.902780 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.902902 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtqfq\" (UniqueName: \"kubernetes.io/projected/747aebcb-2e08-4d53-bdd8-bf2d94165469-kube-api-access-xtqfq\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.903049 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.903188 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-run-httpd\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.903306 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-config-data\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 15:16:43.903393 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-log-httpd\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:43 crc kubenswrapper[4731]: I1125 
15:16:43.903534 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.006107 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.006599 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-scripts\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.006845 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.007008 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtqfq\" (UniqueName: \"kubernetes.io/projected/747aebcb-2e08-4d53-bdd8-bf2d94165469-kube-api-access-xtqfq\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.007377 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.007584 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-run-httpd\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.007751 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-config-data\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.007962 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-log-httpd\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.008261 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-run-httpd\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: 
I1125 15:16:44.008531 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-log-httpd\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.010530 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.013644 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-scripts\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.014604 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.017596 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.021958 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-config-data\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.030340 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtqfq\" (UniqueName: \"kubernetes.io/projected/747aebcb-2e08-4d53-bdd8-bf2d94165469-kube-api-access-xtqfq\") pod \"ceilometer-0\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.161647 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:16:44 crc kubenswrapper[4731]: W1125 15:16:44.615571 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod747aebcb_2e08_4d53_bdd8_bf2d94165469.slice/crio-7d5a08377366fb7a4608e0f94f6a623532d5f6625a8249957aff4bedcc739e21 WatchSource:0}: Error finding container 7d5a08377366fb7a4608e0f94f6a623532d5f6625a8249957aff4bedcc739e21: Status 404 returned error can't find the container with id 7d5a08377366fb7a4608e0f94f6a623532d5f6625a8249957aff4bedcc739e21 Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.629903 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.760103 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzrd6" event={"ID":"9752677e-1420-41db-a27c-96995a7553b8","Type":"ContainerStarted","Data":"077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b"} Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.763787 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747aebcb-2e08-4d53-bdd8-bf2d94165469","Type":"ContainerStarted","Data":"7d5a08377366fb7a4608e0f94f6a623532d5f6625a8249957aff4bedcc739e21"} Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.766211 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"57141195-3615-4e7f-b643-96c727cb9070","Type":"ContainerStarted","Data":"9c4a351c6e8974d776b893f4d364039dc595b5ae409221f7eee6c3002fb08ce1"} Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.766393 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.813780 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.752000902 podStartE2EDuration="7.813761943s" podCreationTimestamp="2025-11-25 15:16:37 +0000 UTC" firstStartedPulling="2025-11-25 15:16:38.615123124 +0000 UTC m=+1357.362257554" lastFinishedPulling="2025-11-25 15:16:43.676884165 +0000 UTC m=+1362.424018595" observedRunningTime="2025-11-25 15:16:44.800653123 +0000 UTC m=+1363.547787553" watchObservedRunningTime="2025-11-25 15:16:44.813761943 +0000 UTC m=+1363.560896363" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.823930 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7ac4656-528c-4e44-8b67-f39c0712e4ac" path="/var/lib/kubelet/pods/d7ac4656-528c-4e44-8b67-f39c0712e4ac/volumes" Nov 25 15:16:44 crc kubenswrapper[4731]: I1125 15:16:44.974155 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 15:16:45 crc kubenswrapper[4731]: I1125 15:16:45.003674 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 15:16:45 crc kubenswrapper[4731]: I1125 15:16:45.776068 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747aebcb-2e08-4d53-bdd8-bf2d94165469","Type":"ContainerStarted","Data":"87c17c85e853370d7d6119b822af2194adba90f7c8ba6ae239454259cc8eff16"} Nov 25 15:16:45 crc kubenswrapper[4731]: I1125 15:16:45.828869 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 15:16:46 crc 
kubenswrapper[4731]: I1125 15:16:46.027070 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:16:46 crc kubenswrapper[4731]: I1125 15:16:46.027146 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:16:46 crc kubenswrapper[4731]: I1125 15:16:46.786014 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747aebcb-2e08-4d53-bdd8-bf2d94165469","Type":"ContainerStarted","Data":"e2c774740dc744329ab80d75ac558834591cb828c7e6c2428c3e3cba84c5b028"} Nov 25 15:16:47 crc kubenswrapper[4731]: I1125 15:16:47.109111 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:47 crc kubenswrapper[4731]: I1125 15:16:47.109111 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.202:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:16:47 crc kubenswrapper[4731]: I1125 15:16:47.801002 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747aebcb-2e08-4d53-bdd8-bf2d94165469","Type":"ContainerStarted","Data":"0cdb794bde11c4788a17b90d741f19fec950b03ee0ec420c5407609a5d13f53e"} Nov 25 15:16:48 crc kubenswrapper[4731]: I1125 15:16:48.118778 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Nov 25 15:16:50 crc kubenswrapper[4731]: I1125 15:16:50.040347 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:16:50 crc kubenswrapper[4731]: I1125 15:16:50.040937 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:16:50 crc kubenswrapper[4731]: I1125 15:16:50.839217 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747aebcb-2e08-4d53-bdd8-bf2d94165469","Type":"ContainerStarted","Data":"994cbb6f31b38657f612ba3412542593b92181a6653b1dd00c5eb7d62db231ad"} Nov 25 15:16:50 crc kubenswrapper[4731]: I1125 15:16:50.839390 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:16:50 crc kubenswrapper[4731]: I1125 15:16:50.861543 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.146151297 podStartE2EDuration="7.861519192s" podCreationTimestamp="2025-11-25 15:16:43 +0000 UTC" firstStartedPulling="2025-11-25 15:16:44.618302359 +0000 UTC m=+1363.365436789" lastFinishedPulling="2025-11-25 15:16:49.333670254 +0000 UTC m=+1368.080804684" observedRunningTime="2025-11-25 15:16:50.854949557 +0000 UTC m=+1369.602084007" 
watchObservedRunningTime="2025-11-25 15:16:50.861519192 +0000 UTC m=+1369.608653622" Nov 25 15:16:50 crc kubenswrapper[4731]: I1125 15:16:50.887970 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 15:16:50 crc kubenswrapper[4731]: I1125 15:16:50.888048 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 15:16:50 crc kubenswrapper[4731]: I1125 15:16:50.895836 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 15:16:50 crc kubenswrapper[4731]: I1125 15:16:50.896389 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 15:16:51 crc kubenswrapper[4731]: I1125 15:16:51.851410 4731 generic.go:334] "Generic (PLEG): container finished" podID="9752677e-1420-41db-a27c-96995a7553b8" containerID="077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b" exitCode=0 Nov 25 15:16:51 crc kubenswrapper[4731]: I1125 15:16:51.851464 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzrd6" event={"ID":"9752677e-1420-41db-a27c-96995a7553b8","Type":"ContainerDied","Data":"077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b"} Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.736167 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.783360 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-294kk\" (UniqueName: \"kubernetes.io/projected/52434f5c-492f-4ee4-866b-f97a38e99fe2-kube-api-access-294kk\") pod \"52434f5c-492f-4ee4-866b-f97a38e99fe2\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.783506 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-combined-ca-bundle\") pod \"52434f5c-492f-4ee4-866b-f97a38e99fe2\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.783590 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-config-data\") pod \"52434f5c-492f-4ee4-866b-f97a38e99fe2\" (UID: \"52434f5c-492f-4ee4-866b-f97a38e99fe2\") " Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.790717 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52434f5c-492f-4ee4-866b-f97a38e99fe2-kube-api-access-294kk" (OuterVolumeSpecName: "kube-api-access-294kk") pod "52434f5c-492f-4ee4-866b-f97a38e99fe2" (UID: "52434f5c-492f-4ee4-866b-f97a38e99fe2"). InnerVolumeSpecName "kube-api-access-294kk". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.817354 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-config-data" (OuterVolumeSpecName: "config-data") pod "52434f5c-492f-4ee4-866b-f97a38e99fe2" (UID: "52434f5c-492f-4ee4-866b-f97a38e99fe2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.818236 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52434f5c-492f-4ee4-866b-f97a38e99fe2" (UID: "52434f5c-492f-4ee4-866b-f97a38e99fe2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.863431 4731 generic.go:334] "Generic (PLEG): container finished" podID="52434f5c-492f-4ee4-866b-f97a38e99fe2" containerID="de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939" exitCode=137 Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.863522 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.863491 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52434f5c-492f-4ee4-866b-f97a38e99fe2","Type":"ContainerDied","Data":"de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939"} Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.863982 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"52434f5c-492f-4ee4-866b-f97a38e99fe2","Type":"ContainerDied","Data":"5651ba340d97137ed914f266784e5a982bfc266422cc007405999cfc4c453623"} Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.864009 4731 scope.go:117] "RemoveContainer" containerID="de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.867949 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzrd6" event={"ID":"9752677e-1420-41db-a27c-96995a7553b8","Type":"ContainerStarted","Data":"f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd"} Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.886100 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-294kk\" (UniqueName: \"kubernetes.io/projected/52434f5c-492f-4ee4-866b-f97a38e99fe2-kube-api-access-294kk\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.886411 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.886473 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52434f5c-492f-4ee4-866b-f97a38e99fe2-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.888648 4731 scope.go:117] "RemoveContainer" containerID="de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939" Nov 25 15:16:52 crc kubenswrapper[4731]: E1125 15:16:52.889795 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939\": container with ID starting with de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939 not found: ID does not exist" containerID="de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939" Nov 25 15:16:52 crc 
kubenswrapper[4731]: I1125 15:16:52.889863 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939"} err="failed to get container status \"de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939\": rpc error: code = NotFound desc = could not find container \"de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939\": container with ID starting with de57e3c2bbcf3355c4b462af2f26ead6a1b0c2696e476db4a1ca419e840c8939 not found: ID does not exist" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.909230 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dzrd6" podStartSLOduration=4.257680691 podStartE2EDuration="18.909204925s" podCreationTimestamp="2025-11-25 15:16:34 +0000 UTC" firstStartedPulling="2025-11-25 15:16:37.672568721 +0000 UTC m=+1356.419703141" lastFinishedPulling="2025-11-25 15:16:52.324092935 +0000 UTC m=+1371.071227375" observedRunningTime="2025-11-25 15:16:52.890218056 +0000 UTC m=+1371.637352506" watchObservedRunningTime="2025-11-25 15:16:52.909204925 +0000 UTC m=+1371.656339355" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.919760 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.943161 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.955577 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:16:52 crc kubenswrapper[4731]: E1125 15:16:52.967219 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52434f5c-492f-4ee4-866b-f97a38e99fe2" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.967277 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="52434f5c-492f-4ee4-866b-f97a38e99fe2" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.968144 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="52434f5c-492f-4ee4-866b-f97a38e99fe2" containerName="nova-cell1-novncproxy-novncproxy" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.975677 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.980226 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.980322 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.982211 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Nov 25 15:16:52 crc kubenswrapper[4731]: I1125 15:16:52.990780 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.092218 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsdgc\" (UniqueName: \"kubernetes.io/projected/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-kube-api-access-jsdgc\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.092300 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.092335 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.092414 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.092529 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.194562 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.194671 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " 
pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.194774 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsdgc\" (UniqueName: \"kubernetes.io/projected/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-kube-api-access-jsdgc\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.194833 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.194861 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.198837 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.198916 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.199071 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.200373 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.211508 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsdgc\" (UniqueName: \"kubernetes.io/projected/d1ab61a1-85f7-4bf9-93f8-598cc80c8283-kube-api-access-jsdgc\") pod \"nova-cell1-novncproxy-0\" (UID: \"d1ab61a1-85f7-4bf9-93f8-598cc80c8283\") " pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.296526 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.786085 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Nov 25 15:16:53 crc kubenswrapper[4731]: W1125 15:16:53.796396 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1ab61a1_85f7_4bf9_93f8_598cc80c8283.slice/crio-250b575ff3765b17f2c763f3bcd11350e0f4f99df96532e8ccfed8c0b5598bee WatchSource:0}: Error finding container 250b575ff3765b17f2c763f3bcd11350e0f4f99df96532e8ccfed8c0b5598bee: Status 404 returned error can't find the container with id 250b575ff3765b17f2c763f3bcd11350e0f4f99df96532e8ccfed8c0b5598bee Nov 25 15:16:53 crc kubenswrapper[4731]: I1125 15:16:53.883676 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1ab61a1-85f7-4bf9-93f8-598cc80c8283","Type":"ContainerStarted","Data":"250b575ff3765b17f2c763f3bcd11350e0f4f99df96532e8ccfed8c0b5598bee"} Nov 25 15:16:54 crc kubenswrapper[4731]: I1125 15:16:54.830269 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52434f5c-492f-4ee4-866b-f97a38e99fe2" path="/var/lib/kubelet/pods/52434f5c-492f-4ee4-866b-f97a38e99fe2/volumes" Nov 25 15:16:54 crc kubenswrapper[4731]: I1125 15:16:54.897515 4731 generic.go:334] "Generic (PLEG): container finished" podID="c717723d-912c-4e6d-add5-f7f1628cb069" containerID="77f20e12ba2b6aedc665350a94f95303a2f00f9c4da605142cabca833bc9658a" exitCode=0 Nov 25 15:16:54 crc kubenswrapper[4731]: I1125 15:16:54.897806 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nwdc4" event={"ID":"c717723d-912c-4e6d-add5-f7f1628cb069","Type":"ContainerDied","Data":"77f20e12ba2b6aedc665350a94f95303a2f00f9c4da605142cabca833bc9658a"} Nov 25 15:16:54 crc kubenswrapper[4731]: I1125 15:16:54.899591 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"d1ab61a1-85f7-4bf9-93f8-598cc80c8283","Type":"ContainerStarted","Data":"6657652fad6d3c538d42e1031775bf0101fce92aef44930acbca52025b276c24"} Nov 25 15:16:54 crc kubenswrapper[4731]: I1125 15:16:54.906747 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:54 crc kubenswrapper[4731]: I1125 15:16:54.906798 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:16:54 crc kubenswrapper[4731]: I1125 15:16:54.940480 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.940458853 podStartE2EDuration="2.940458853s" podCreationTimestamp="2025-11-25 15:16:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:54.930426621 +0000 UTC m=+1373.677561051" watchObservedRunningTime="2025-11-25 15:16:54.940458853 +0000 UTC m=+1373.687593293" Nov 25 15:16:55 crc kubenswrapper[4731]: I1125 15:16:55.958589 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dzrd6" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="registry-server" probeResult="failure" output=< Nov 25 15:16:55 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:16:55 crc 
kubenswrapper[4731]: > Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.040697 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.040804 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.042008 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.042040 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.046217 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.050786 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.247549 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-q2h5f"] Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.249956 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.255646 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-q2h5f"] Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.324594 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.365506 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6h22\" (UniqueName: \"kubernetes.io/projected/c717723d-912c-4e6d-add5-f7f1628cb069-kube-api-access-m6h22\") pod \"c717723d-912c-4e6d-add5-f7f1628cb069\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.365670 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-scripts\") pod \"c717723d-912c-4e6d-add5-f7f1628cb069\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.365729 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-config-data\") pod \"c717723d-912c-4e6d-add5-f7f1628cb069\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.365779 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-combined-ca-bundle\") pod \"c717723d-912c-4e6d-add5-f7f1628cb069\" (UID: \"c717723d-912c-4e6d-add5-f7f1628cb069\") " Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.366093 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc 
kubenswrapper[4731]: I1125 15:16:56.366149 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.366197 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-config\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.366217 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.366278 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.366331 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gmrn\" (UniqueName: \"kubernetes.io/projected/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-kube-api-access-6gmrn\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.372570 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-scripts" (OuterVolumeSpecName: "scripts") pod "c717723d-912c-4e6d-add5-f7f1628cb069" (UID: "c717723d-912c-4e6d-add5-f7f1628cb069"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.396999 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c717723d-912c-4e6d-add5-f7f1628cb069-kube-api-access-m6h22" (OuterVolumeSpecName: "kube-api-access-m6h22") pod "c717723d-912c-4e6d-add5-f7f1628cb069" (UID: "c717723d-912c-4e6d-add5-f7f1628cb069"). InnerVolumeSpecName "kube-api-access-m6h22". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.414029 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c717723d-912c-4e6d-add5-f7f1628cb069" (UID: "c717723d-912c-4e6d-add5-f7f1628cb069"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.425929 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-config-data" (OuterVolumeSpecName: "config-data") pod "c717723d-912c-4e6d-add5-f7f1628cb069" (UID: "c717723d-912c-4e6d-add5-f7f1628cb069"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.470128 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.470218 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.470284 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-config\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.470727 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.470862 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.470934 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gmrn\" (UniqueName: \"kubernetes.io/projected/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-kube-api-access-6gmrn\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.471054 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.471069 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.471082 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6h22\" (UniqueName: 
\"kubernetes.io/projected/c717723d-912c-4e6d-add5-f7f1628cb069-kube-api-access-m6h22\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.471094 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c717723d-912c-4e6d-add5-f7f1628cb069-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.471794 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.471930 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.471972 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-config\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.471945 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.472380 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.493830 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gmrn\" (UniqueName: \"kubernetes.io/projected/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-kube-api-access-6gmrn\") pod \"dnsmasq-dns-f84f9ccf-q2h5f\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.646762 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.917641 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-nwdc4" Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.917987 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-nwdc4" event={"ID":"c717723d-912c-4e6d-add5-f7f1628cb069","Type":"ContainerDied","Data":"275401f4b6f4fc7455f88cf9abb1490afade33cacd3030741c3f950f575ddd94"} Nov 25 15:16:56 crc kubenswrapper[4731]: I1125 15:16:56.918042 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="275401f4b6f4fc7455f88cf9abb1490afade33cacd3030741c3f950f575ddd94" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.035411 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 15:16:57 crc kubenswrapper[4731]: E1125 15:16:57.035892 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c717723d-912c-4e6d-add5-f7f1628cb069" containerName="nova-cell1-conductor-db-sync" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.035905 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c717723d-912c-4e6d-add5-f7f1628cb069" containerName="nova-cell1-conductor-db-sync" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.036133 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c717723d-912c-4e6d-add5-f7f1628cb069" containerName="nova-cell1-conductor-db-sync" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.036865 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.041043 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.050980 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.089446 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc2b506-8d10-4cfc-ae43-0978926809ac-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cfc2b506-8d10-4cfc-ae43-0978926809ac\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.089536 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlvch\" (UniqueName: \"kubernetes.io/projected/cfc2b506-8d10-4cfc-ae43-0978926809ac-kube-api-access-xlvch\") pod \"nova-cell1-conductor-0\" (UID: \"cfc2b506-8d10-4cfc-ae43-0978926809ac\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.089622 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc2b506-8d10-4cfc-ae43-0978926809ac-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cfc2b506-8d10-4cfc-ae43-0978926809ac\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.122118 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-q2h5f"] Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.190874 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc2b506-8d10-4cfc-ae43-0978926809ac-config-data\") pod 
\"nova-cell1-conductor-0\" (UID: \"cfc2b506-8d10-4cfc-ae43-0978926809ac\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.191214 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc2b506-8d10-4cfc-ae43-0978926809ac-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cfc2b506-8d10-4cfc-ae43-0978926809ac\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.191302 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlvch\" (UniqueName: \"kubernetes.io/projected/cfc2b506-8d10-4cfc-ae43-0978926809ac-kube-api-access-xlvch\") pod \"nova-cell1-conductor-0\" (UID: \"cfc2b506-8d10-4cfc-ae43-0978926809ac\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.196023 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cfc2b506-8d10-4cfc-ae43-0978926809ac-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"cfc2b506-8d10-4cfc-ae43-0978926809ac\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.199278 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cfc2b506-8d10-4cfc-ae43-0978926809ac-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"cfc2b506-8d10-4cfc-ae43-0978926809ac\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.216886 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlvch\" (UniqueName: \"kubernetes.io/projected/cfc2b506-8d10-4cfc-ae43-0978926809ac-kube-api-access-xlvch\") pod \"nova-cell1-conductor-0\" (UID: \"cfc2b506-8d10-4cfc-ae43-0978926809ac\") " pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.357760 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.856246 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.926666 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"cfc2b506-8d10-4cfc-ae43-0978926809ac","Type":"ContainerStarted","Data":"a2bdee9ccb6bed2d22dfe998994e3c93c918076ba81a5bb819a2a403108d81a1"} Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.929349 4731 generic.go:334] "Generic (PLEG): container finished" podID="1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" containerID="deef86bfab59b86efcd7f4fd4bc95ed18be374d9085529deddf629a416572939" exitCode=0 Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.929590 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" event={"ID":"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee","Type":"ContainerDied","Data":"deef86bfab59b86efcd7f4fd4bc95ed18be374d9085529deddf629a416572939"} Nov 25 15:16:57 crc kubenswrapper[4731]: I1125 15:16:57.929743 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" event={"ID":"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee","Type":"ContainerStarted","Data":"e706099891e1470d7d827f8af9b30e7f6173786030a0636d0679f765ede8ca6c"} Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.296830 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.700035 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.769678 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.770000 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="ceilometer-central-agent" containerID="cri-o://87c17c85e853370d7d6119b822af2194adba90f7c8ba6ae239454259cc8eff16" gracePeriod=30 Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.770064 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="sg-core" containerID="cri-o://0cdb794bde11c4788a17b90d741f19fec950b03ee0ec420c5407609a5d13f53e" gracePeriod=30 Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.770064 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="proxy-httpd" containerID="cri-o://994cbb6f31b38657f612ba3412542593b92181a6653b1dd00c5eb7d62db231ad" gracePeriod=30 Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.771593 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="ceilometer-notification-agent" containerID="cri-o://e2c774740dc744329ab80d75ac558834591cb828c7e6c2428c3e3cba84c5b028" gracePeriod=30 Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.958841 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" 
event={"ID":"cfc2b506-8d10-4cfc-ae43-0978926809ac","Type":"ContainerStarted","Data":"f11ea4de7fcb0b89565337f0e7a46f028ad6732bb65eb39bba0714b96e282d37"} Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.969197 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerName="nova-api-log" containerID="cri-o://d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f" gracePeriod=30 Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.970590 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" event={"ID":"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee","Type":"ContainerStarted","Data":"14edbb1526f342225e8bb4a48d6e77e2db598fce1e09dea6d28510f03bd97d81"} Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.970657 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.970789 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerName="nova-api-api" containerID="cri-o://45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d" gracePeriod=30 Nov 25 15:16:58 crc kubenswrapper[4731]: I1125 15:16:58.972618 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:16:59 crc kubenswrapper[4731]: I1125 15:16:59.029217 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.029188082 podStartE2EDuration="3.029188082s" podCreationTimestamp="2025-11-25 15:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:58.981007418 +0000 UTC m=+1377.728141848" watchObservedRunningTime="2025-11-25 15:16:59.029188082 +0000 UTC m=+1377.776322512" Nov 25 15:16:59 crc kubenswrapper[4731]: I1125 15:16:59.043397 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" podStartSLOduration=3.043372629 podStartE2EDuration="3.043372629s" podCreationTimestamp="2025-11-25 15:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:16:59.008628664 +0000 UTC m=+1377.755763094" watchObservedRunningTime="2025-11-25 15:16:59.043372629 +0000 UTC m=+1377.790507059" Nov 25 15:16:59 crc kubenswrapper[4731]: I1125 15:16:59.980296 4731 generic.go:334] "Generic (PLEG): container finished" podID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerID="d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f" exitCode=143 Nov 25 15:16:59 crc kubenswrapper[4731]: I1125 15:16:59.980404 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14dd41a0-6d32-4e6d-903b-a212c83ba972","Type":"ContainerDied","Data":"d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f"} Nov 25 15:16:59 crc kubenswrapper[4731]: I1125 15:16:59.984181 4731 generic.go:334] "Generic (PLEG): container finished" podID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerID="994cbb6f31b38657f612ba3412542593b92181a6653b1dd00c5eb7d62db231ad" exitCode=0 Nov 25 15:16:59 crc kubenswrapper[4731]: I1125 15:16:59.984199 4731 generic.go:334] "Generic (PLEG): container 
finished" podID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerID="0cdb794bde11c4788a17b90d741f19fec950b03ee0ec420c5407609a5d13f53e" exitCode=2 Nov 25 15:16:59 crc kubenswrapper[4731]: I1125 15:16:59.984205 4731 generic.go:334] "Generic (PLEG): container finished" podID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerID="87c17c85e853370d7d6119b822af2194adba90f7c8ba6ae239454259cc8eff16" exitCode=0 Nov 25 15:16:59 crc kubenswrapper[4731]: I1125 15:16:59.984236 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747aebcb-2e08-4d53-bdd8-bf2d94165469","Type":"ContainerDied","Data":"994cbb6f31b38657f612ba3412542593b92181a6653b1dd00c5eb7d62db231ad"} Nov 25 15:16:59 crc kubenswrapper[4731]: I1125 15:16:59.984277 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747aebcb-2e08-4d53-bdd8-bf2d94165469","Type":"ContainerDied","Data":"0cdb794bde11c4788a17b90d741f19fec950b03ee0ec420c5407609a5d13f53e"} Nov 25 15:16:59 crc kubenswrapper[4731]: I1125 15:16:59.984287 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747aebcb-2e08-4d53-bdd8-bf2d94165469","Type":"ContainerDied","Data":"87c17c85e853370d7d6119b822af2194adba90f7c8ba6ae239454259cc8eff16"} Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.572523 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.706777 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14dd41a0-6d32-4e6d-903b-a212c83ba972-logs\") pod \"14dd41a0-6d32-4e6d-903b-a212c83ba972\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.706950 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2mb2\" (UniqueName: \"kubernetes.io/projected/14dd41a0-6d32-4e6d-903b-a212c83ba972-kube-api-access-x2mb2\") pod \"14dd41a0-6d32-4e6d-903b-a212c83ba972\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.707302 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14dd41a0-6d32-4e6d-903b-a212c83ba972-logs" (OuterVolumeSpecName: "logs") pod "14dd41a0-6d32-4e6d-903b-a212c83ba972" (UID: "14dd41a0-6d32-4e6d-903b-a212c83ba972"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.708116 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-combined-ca-bundle\") pod \"14dd41a0-6d32-4e6d-903b-a212c83ba972\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.708204 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-config-data\") pod \"14dd41a0-6d32-4e6d-903b-a212c83ba972\" (UID: \"14dd41a0-6d32-4e6d-903b-a212c83ba972\") " Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.709267 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/14dd41a0-6d32-4e6d-903b-a212c83ba972-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.714150 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14dd41a0-6d32-4e6d-903b-a212c83ba972-kube-api-access-x2mb2" (OuterVolumeSpecName: "kube-api-access-x2mb2") pod "14dd41a0-6d32-4e6d-903b-a212c83ba972" (UID: "14dd41a0-6d32-4e6d-903b-a212c83ba972"). InnerVolumeSpecName "kube-api-access-x2mb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.739930 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-config-data" (OuterVolumeSpecName: "config-data") pod "14dd41a0-6d32-4e6d-903b-a212c83ba972" (UID: "14dd41a0-6d32-4e6d-903b-a212c83ba972"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.745986 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14dd41a0-6d32-4e6d-903b-a212c83ba972" (UID: "14dd41a0-6d32-4e6d-903b-a212c83ba972"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.810860 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2mb2\" (UniqueName: \"kubernetes.io/projected/14dd41a0-6d32-4e6d-903b-a212c83ba972-kube-api-access-x2mb2\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.810891 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:02 crc kubenswrapper[4731]: I1125 15:17:02.810899 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14dd41a0-6d32-4e6d-903b-a212c83ba972-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.022183 4731 generic.go:334] "Generic (PLEG): container finished" podID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerID="e2c774740dc744329ab80d75ac558834591cb828c7e6c2428c3e3cba84c5b028" exitCode=0 Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.022269 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747aebcb-2e08-4d53-bdd8-bf2d94165469","Type":"ContainerDied","Data":"e2c774740dc744329ab80d75ac558834591cb828c7e6c2428c3e3cba84c5b028"} Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.028024 4731 generic.go:334] "Generic (PLEG): container finished" podID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerID="45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d" exitCode=0 Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.028058 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14dd41a0-6d32-4e6d-903b-a212c83ba972","Type":"ContainerDied","Data":"45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d"} Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.028079 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"14dd41a0-6d32-4e6d-903b-a212c83ba972","Type":"ContainerDied","Data":"7e7f597145b7bb51aeecccb5b6e2a11ecfc321d6d425f2c40a48bad20a8258af"} Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.028100 4731 scope.go:117] "RemoveContainer" containerID="45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.028259 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.084791 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.091569 4731 scope.go:117] "RemoveContainer" containerID="d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.098589 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.119346 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:03 crc kubenswrapper[4731]: E1125 15:17:03.119918 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerName="nova-api-log" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.120020 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerName="nova-api-log" Nov 25 15:17:03 crc kubenswrapper[4731]: E1125 15:17:03.120145 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerName="nova-api-api" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.120214 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerName="nova-api-api" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.120535 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerName="nova-api-log" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.120624 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" containerName="nova-api-api" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.121916 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.124581 4731 scope.go:117] "RemoveContainer" containerID="45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d" Nov 25 15:17:03 crc kubenswrapper[4731]: E1125 15:17:03.131635 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d\": container with ID starting with 45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d not found: ID does not exist" containerID="45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.131690 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d"} err="failed to get container status \"45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d\": rpc error: code = NotFound desc = could not find container \"45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d\": container with ID starting with 45bff675962ce738f35bb0789f3f6a3d899ac81d26b631fa2004c4c4ce58546d not found: ID does not exist" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.131719 4731 scope.go:117] "RemoveContainer" containerID="d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f" Nov 25 15:17:03 crc kubenswrapper[4731]: E1125 15:17:03.132412 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f\": container with ID starting with d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f not found: ID does not exist" containerID="d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.132541 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f"} err="failed to get container status \"d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f\": rpc error: code = NotFound desc = could not find container \"d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f\": container with ID starting with d4ef8ae5dc50378c5afa1e441e5f147380f411ca57eada1c6e1e65a1da739a0f not found: ID does not exist" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.132761 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.132858 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.132965 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.140922 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.218384 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 
15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.218677 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-config-data\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.218853 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b28d9f-757a-452e-8e77-d726b697a64d-logs\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.218972 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-public-tls-certs\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.219166 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.219288 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4h76\" (UniqueName: \"kubernetes.io/projected/75b28d9f-757a-452e-8e77-d726b697a64d-kube-api-access-h4h76\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.296771 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.316967 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.321008 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.321074 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4h76\" (UniqueName: \"kubernetes.io/projected/75b28d9f-757a-452e-8e77-d726b697a64d-kube-api-access-h4h76\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.321122 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.321171 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-config-data\") pod \"nova-api-0\" 
(UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.321220 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b28d9f-757a-452e-8e77-d726b697a64d-logs\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.321256 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-public-tls-certs\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.321881 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b28d9f-757a-452e-8e77-d726b697a64d-logs\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.325356 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-public-tls-certs\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.325957 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.326173 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-config-data\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.326173 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.339617 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4h76\" (UniqueName: \"kubernetes.io/projected/75b28d9f-757a-452e-8e77-d726b697a64d-kube-api-access-h4h76\") pod \"nova-api-0\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.454702 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.779987 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.834484 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-combined-ca-bundle\") pod \"747aebcb-2e08-4d53-bdd8-bf2d94165469\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.835310 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtqfq\" (UniqueName: \"kubernetes.io/projected/747aebcb-2e08-4d53-bdd8-bf2d94165469-kube-api-access-xtqfq\") pod \"747aebcb-2e08-4d53-bdd8-bf2d94165469\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.836180 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-run-httpd\") pod \"747aebcb-2e08-4d53-bdd8-bf2d94165469\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.836297 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-ceilometer-tls-certs\") pod \"747aebcb-2e08-4d53-bdd8-bf2d94165469\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.836600 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-config-data\") pod \"747aebcb-2e08-4d53-bdd8-bf2d94165469\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.836650 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "747aebcb-2e08-4d53-bdd8-bf2d94165469" (UID: "747aebcb-2e08-4d53-bdd8-bf2d94165469"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.836706 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-sg-core-conf-yaml\") pod \"747aebcb-2e08-4d53-bdd8-bf2d94165469\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.836768 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-scripts\") pod \"747aebcb-2e08-4d53-bdd8-bf2d94165469\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.836864 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-log-httpd\") pod \"747aebcb-2e08-4d53-bdd8-bf2d94165469\" (UID: \"747aebcb-2e08-4d53-bdd8-bf2d94165469\") " Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.837604 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.840381 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "747aebcb-2e08-4d53-bdd8-bf2d94165469" (UID: "747aebcb-2e08-4d53-bdd8-bf2d94165469"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.840934 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/747aebcb-2e08-4d53-bdd8-bf2d94165469-kube-api-access-xtqfq" (OuterVolumeSpecName: "kube-api-access-xtqfq") pod "747aebcb-2e08-4d53-bdd8-bf2d94165469" (UID: "747aebcb-2e08-4d53-bdd8-bf2d94165469"). InnerVolumeSpecName "kube-api-access-xtqfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.842984 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-scripts" (OuterVolumeSpecName: "scripts") pod "747aebcb-2e08-4d53-bdd8-bf2d94165469" (UID: "747aebcb-2e08-4d53-bdd8-bf2d94165469"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.888727 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "747aebcb-2e08-4d53-bdd8-bf2d94165469" (UID: "747aebcb-2e08-4d53-bdd8-bf2d94165469"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.907533 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "747aebcb-2e08-4d53-bdd8-bf2d94165469" (UID: "747aebcb-2e08-4d53-bdd8-bf2d94165469"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.939212 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.939244 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.939256 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/747aebcb-2e08-4d53-bdd8-bf2d94165469-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.939268 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtqfq\" (UniqueName: \"kubernetes.io/projected/747aebcb-2e08-4d53-bdd8-bf2d94165469-kube-api-access-xtqfq\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.939281 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.951089 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-config-data" (OuterVolumeSpecName: "config-data") pod "747aebcb-2e08-4d53-bdd8-bf2d94165469" (UID: "747aebcb-2e08-4d53-bdd8-bf2d94165469"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:03 crc kubenswrapper[4731]: I1125 15:17:03.951186 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "747aebcb-2e08-4d53-bdd8-bf2d94165469" (UID: "747aebcb-2e08-4d53-bdd8-bf2d94165469"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.026789 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.039721 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75b28d9f-757a-452e-8e77-d726b697a64d","Type":"ContainerStarted","Data":"e01cf396e1e9183360e2c13fc5a05e903b5d90308726abbaeaac4abd77af7390"} Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.040747 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.040777 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/747aebcb-2e08-4d53-bdd8-bf2d94165469-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.042845 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"747aebcb-2e08-4d53-bdd8-bf2d94165469","Type":"ContainerDied","Data":"7d5a08377366fb7a4608e0f94f6a623532d5f6625a8249957aff4bedcc739e21"} Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.042893 4731 scope.go:117] "RemoveContainer" containerID="994cbb6f31b38657f612ba3412542593b92181a6653b1dd00c5eb7d62db231ad" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.042902 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.061955 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.072029 4731 scope.go:117] "RemoveContainer" containerID="0cdb794bde11c4788a17b90d741f19fec950b03ee0ec420c5407609a5d13f53e" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.081633 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.111789 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.129588 4731 scope.go:117] "RemoveContainer" containerID="e2c774740dc744329ab80d75ac558834591cb828c7e6c2428c3e3cba84c5b028" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.133746 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:04 crc kubenswrapper[4731]: E1125 15:17:04.134221 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="proxy-httpd" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.134243 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="proxy-httpd" Nov 25 15:17:04 crc kubenswrapper[4731]: E1125 15:17:04.134262 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="sg-core" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.134270 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="sg-core" Nov 25 15:17:04 crc kubenswrapper[4731]: E1125 15:17:04.134285 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="ceilometer-central-agent" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.134292 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="ceilometer-central-agent" Nov 25 15:17:04 crc kubenswrapper[4731]: E1125 15:17:04.134340 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="ceilometer-notification-agent" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.134348 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="ceilometer-notification-agent" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.134539 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="sg-core" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.134569 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="proxy-httpd" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.134579 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="ceilometer-central-agent" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.134597 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" containerName="ceilometer-notification-agent" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.137487 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.140348 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.140534 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.140686 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.165537 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.183743 4731 scope.go:117] "RemoveContainer" containerID="87c17c85e853370d7d6119b822af2194adba90f7c8ba6ae239454259cc8eff16" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.268562 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-config-data\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.268662 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.268688 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-log-httpd\") pod \"ceilometer-0\" 
(UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.268709 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwb5\" (UniqueName: \"kubernetes.io/projected/01072c78-f902-48e8-9edc-fe70ff7e4cd6-kube-api-access-4pwb5\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.268737 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.268786 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.268944 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-scripts\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.268984 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.370510 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.370889 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.370922 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pwb5\" (UniqueName: \"kubernetes.io/projected/01072c78-f902-48e8-9edc-fe70ff7e4cd6-kube-api-access-4pwb5\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.370961 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.371021 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.371080 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-scripts\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.371114 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.371148 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-config-data\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.376888 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-config-data\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.377156 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-run-httpd\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.377359 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-log-httpd\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.382403 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.386201 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.392573 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-scripts\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.396093 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.399281 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pwb5\" (UniqueName: \"kubernetes.io/projected/01072c78-f902-48e8-9edc-fe70ff7e4cd6-kube-api-access-4pwb5\") pod \"ceilometer-0\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.469019 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.821362 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14dd41a0-6d32-4e6d-903b-a212c83ba972" path="/var/lib/kubelet/pods/14dd41a0-6d32-4e6d-903b-a212c83ba972/volumes" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.822330 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="747aebcb-2e08-4d53-bdd8-bf2d94165469" path="/var/lib/kubelet/pods/747aebcb-2e08-4d53-bdd8-bf2d94165469/volumes" Nov 25 15:17:04 crc kubenswrapper[4731]: I1125 15:17:04.924872 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:17:05 crc kubenswrapper[4731]: I1125 15:17:05.058229 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01072c78-f902-48e8-9edc-fe70ff7e4cd6","Type":"ContainerStarted","Data":"69c5066ca534d700df759681d26a9cefc834bf993bcc542ace56c40d320f8995"} Nov 25 15:17:05 crc kubenswrapper[4731]: I1125 15:17:05.062058 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75b28d9f-757a-452e-8e77-d726b697a64d","Type":"ContainerStarted","Data":"724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be"} Nov 25 15:17:05 crc kubenswrapper[4731]: I1125 15:17:05.062114 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75b28d9f-757a-452e-8e77-d726b697a64d","Type":"ContainerStarted","Data":"fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3"} Nov 25 15:17:05 crc kubenswrapper[4731]: I1125 15:17:05.083219 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.083199638 podStartE2EDuration="2.083199638s" podCreationTimestamp="2025-11-25 15:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:17:05.078848348 +0000 UTC m=+1383.825982788" watchObservedRunningTime="2025-11-25 15:17:05.083199638 +0000 UTC m=+1383.830334068" Nov 25 15:17:05 crc kubenswrapper[4731]: I1125 15:17:05.973143 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dzrd6" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="registry-server" probeResult="failure" output=< Nov 25 15:17:05 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:17:05 crc kubenswrapper[4731]: > Nov 25 15:17:06 crc kubenswrapper[4731]: I1125 15:17:06.648504 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:17:06 crc kubenswrapper[4731]: I1125 15:17:06.710761 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-568d7fd7cf-xktrd"] Nov 25 15:17:06 crc kubenswrapper[4731]: I1125 15:17:06.711058 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" podUID="6ef75da3-b43f-463c-b511-f1d832a47f0a" containerName="dnsmasq-dns" containerID="cri-o://f2d2304b88f0461fd1aa060dc3840b65cd6ff7686a998a5060ed94f67db9c6a4" gracePeriod=10 Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.081725 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01072c78-f902-48e8-9edc-fe70ff7e4cd6","Type":"ContainerStarted","Data":"116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467"} Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.084000 4731 generic.go:334] "Generic (PLEG): container finished" podID="6ef75da3-b43f-463c-b511-f1d832a47f0a" containerID="f2d2304b88f0461fd1aa060dc3840b65cd6ff7686a998a5060ed94f67db9c6a4" exitCode=0 Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.084042 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" event={"ID":"6ef75da3-b43f-463c-b511-f1d832a47f0a","Type":"ContainerDied","Data":"f2d2304b88f0461fd1aa060dc3840b65cd6ff7686a998a5060ed94f67db9c6a4"} Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.264283 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" podUID="6ef75da3-b43f-463c-b511-f1d832a47f0a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.196:5353: connect: connection refused" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.404470 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.673487 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.743519 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-nb\") pod \"6ef75da3-b43f-463c-b511-f1d832a47f0a\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.743595 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqq8x\" (UniqueName: \"kubernetes.io/projected/6ef75da3-b43f-463c-b511-f1d832a47f0a-kube-api-access-cqq8x\") pod \"6ef75da3-b43f-463c-b511-f1d832a47f0a\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.743643 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-swift-storage-0\") pod \"6ef75da3-b43f-463c-b511-f1d832a47f0a\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.743767 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-sb\") pod \"6ef75da3-b43f-463c-b511-f1d832a47f0a\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.743785 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-config\") pod \"6ef75da3-b43f-463c-b511-f1d832a47f0a\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.743803 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-svc\") pod \"6ef75da3-b43f-463c-b511-f1d832a47f0a\" (UID: \"6ef75da3-b43f-463c-b511-f1d832a47f0a\") " Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.755201 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef75da3-b43f-463c-b511-f1d832a47f0a-kube-api-access-cqq8x" (OuterVolumeSpecName: "kube-api-access-cqq8x") pod "6ef75da3-b43f-463c-b511-f1d832a47f0a" (UID: "6ef75da3-b43f-463c-b511-f1d832a47f0a"). InnerVolumeSpecName "kube-api-access-cqq8x". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.823546 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6ef75da3-b43f-463c-b511-f1d832a47f0a" (UID: "6ef75da3-b43f-463c-b511-f1d832a47f0a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.826446 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-config" (OuterVolumeSpecName: "config") pod "6ef75da3-b43f-463c-b511-f1d832a47f0a" (UID: "6ef75da3-b43f-463c-b511-f1d832a47f0a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.834593 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6ef75da3-b43f-463c-b511-f1d832a47f0a" (UID: "6ef75da3-b43f-463c-b511-f1d832a47f0a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.837273 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ef75da3-b43f-463c-b511-f1d832a47f0a" (UID: "6ef75da3-b43f-463c-b511-f1d832a47f0a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.847654 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.848161 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqq8x\" (UniqueName: \"kubernetes.io/projected/6ef75da3-b43f-463c-b511-f1d832a47f0a-kube-api-access-cqq8x\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.848215 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.848229 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.848242 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.867120 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ef75da3-b43f-463c-b511-f1d832a47f0a" (UID: "6ef75da3-b43f-463c-b511-f1d832a47f0a"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:17:07 crc kubenswrapper[4731]: I1125 15:17:07.952049 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ef75da3-b43f-463c-b511-f1d832a47f0a-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.016947 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-d5dnp"] Nov 25 15:17:08 crc kubenswrapper[4731]: E1125 15:17:08.017529 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef75da3-b43f-463c-b511-f1d832a47f0a" containerName="dnsmasq-dns" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.017565 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef75da3-b43f-463c-b511-f1d832a47f0a" containerName="dnsmasq-dns" Nov 25 15:17:08 crc kubenswrapper[4731]: E1125 15:17:08.017605 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef75da3-b43f-463c-b511-f1d832a47f0a" containerName="init" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.017615 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef75da3-b43f-463c-b511-f1d832a47f0a" containerName="init" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.017961 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef75da3-b43f-463c-b511-f1d832a47f0a" containerName="dnsmasq-dns" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.018858 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.021422 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.021967 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.030895 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-d5dnp"] Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.095853 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" event={"ID":"6ef75da3-b43f-463c-b511-f1d832a47f0a","Type":"ContainerDied","Data":"61b2ece0ca8eddc005290f19da4cdb69ede4249dbd11e1461a90aded7177590a"} Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.095903 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-xktrd" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.095912 4731 scope.go:117] "RemoveContainer" containerID="f2d2304b88f0461fd1aa060dc3840b65cd6ff7686a998a5060ed94f67db9c6a4" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.116955 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01072c78-f902-48e8-9edc-fe70ff7e4cd6","Type":"ContainerStarted","Data":"9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481"} Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.137461 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xktrd"] Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.144429 4731 scope.go:117] "RemoveContainer" containerID="6cebcf7ec80d36ee484af21f1e270e5d046240cfb0100cabdb600abd98af82d7" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.145628 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-xktrd"] Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.155698 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.156205 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-scripts\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.156243 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ck8p\" (UniqueName: \"kubernetes.io/projected/602f5438-8315-4458-a349-d02a2d0553be-kube-api-access-8ck8p\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.156324 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-config-data\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.257731 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.258196 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-scripts\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.258232 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8ck8p\" (UniqueName: \"kubernetes.io/projected/602f5438-8315-4458-a349-d02a2d0553be-kube-api-access-8ck8p\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.258281 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-config-data\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.263156 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.263318 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-scripts\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.263891 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-config-data\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.274477 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ck8p\" (UniqueName: \"kubernetes.io/projected/602f5438-8315-4458-a349-d02a2d0553be-kube-api-access-8ck8p\") pod \"nova-cell1-cell-mapping-d5dnp\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.333980 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.771769 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-d5dnp"] Nov 25 15:17:08 crc kubenswrapper[4731]: I1125 15:17:08.833186 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef75da3-b43f-463c-b511-f1d832a47f0a" path="/var/lib/kubelet/pods/6ef75da3-b43f-463c-b511-f1d832a47f0a/volumes" Nov 25 15:17:09 crc kubenswrapper[4731]: I1125 15:17:09.130545 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d5dnp" event={"ID":"602f5438-8315-4458-a349-d02a2d0553be","Type":"ContainerStarted","Data":"b38d592d9dc6999251c52315270c17392dd45e89b0cd88b9a6c2ff801f67408d"} Nov 25 15:17:09 crc kubenswrapper[4731]: I1125 15:17:09.130851 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d5dnp" event={"ID":"602f5438-8315-4458-a349-d02a2d0553be","Type":"ContainerStarted","Data":"281030ce4d04c31211308d5c22836e0a309a19053eba9aebe26996135fd0f813"} Nov 25 15:17:09 crc kubenswrapper[4731]: I1125 15:17:09.133924 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01072c78-f902-48e8-9edc-fe70ff7e4cd6","Type":"ContainerStarted","Data":"cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d"} Nov 25 15:17:09 crc kubenswrapper[4731]: I1125 15:17:09.157481 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-d5dnp" podStartSLOduration=2.157461601 podStartE2EDuration="2.157461601s" podCreationTimestamp="2025-11-25 15:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:17:09.149386398 +0000 UTC m=+1387.896520848" watchObservedRunningTime="2025-11-25 15:17:09.157461601 +0000 UTC m=+1387.904596031" Nov 25 15:17:10 crc kubenswrapper[4731]: I1125 15:17:10.146601 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01072c78-f902-48e8-9edc-fe70ff7e4cd6","Type":"ContainerStarted","Data":"ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656"} Nov 25 15:17:10 crc kubenswrapper[4731]: I1125 15:17:10.146881 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:17:10 crc kubenswrapper[4731]: I1125 15:17:10.167282 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.806913459 podStartE2EDuration="6.167248729s" podCreationTimestamp="2025-11-25 15:17:04 +0000 UTC" firstStartedPulling="2025-11-25 15:17:04.931691411 +0000 UTC m=+1383.678825851" lastFinishedPulling="2025-11-25 15:17:09.292026691 +0000 UTC m=+1388.039161121" observedRunningTime="2025-11-25 15:17:10.16651503 +0000 UTC m=+1388.913649460" watchObservedRunningTime="2025-11-25 15:17:10.167248729 +0000 UTC m=+1388.914383169" Nov 25 15:17:13 crc kubenswrapper[4731]: I1125 15:17:13.455803 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:17:13 crc kubenswrapper[4731]: I1125 15:17:13.456566 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:17:14 crc kubenswrapper[4731]: I1125 15:17:14.472162 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="75b28d9f-757a-452e-8e77-d726b697a64d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:17:14 crc kubenswrapper[4731]: I1125 15:17:14.472165 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="75b28d9f-757a-452e-8e77-d726b697a64d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:17:15 crc kubenswrapper[4731]: I1125 15:17:15.204914 4731 generic.go:334] "Generic (PLEG): container finished" podID="602f5438-8315-4458-a349-d02a2d0553be" containerID="b38d592d9dc6999251c52315270c17392dd45e89b0cd88b9a6c2ff801f67408d" exitCode=0 Nov 25 15:17:15 crc kubenswrapper[4731]: I1125 15:17:15.204954 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d5dnp" event={"ID":"602f5438-8315-4458-a349-d02a2d0553be","Type":"ContainerDied","Data":"b38d592d9dc6999251c52315270c17392dd45e89b0cd88b9a6c2ff801f67408d"} Nov 25 15:17:15 crc kubenswrapper[4731]: I1125 15:17:15.972777 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dzrd6" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="registry-server" probeResult="failure" output=< Nov 25 15:17:15 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:17:15 crc kubenswrapper[4731]: > Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.602210 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.728623 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-scripts\") pod \"602f5438-8315-4458-a349-d02a2d0553be\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.728704 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-combined-ca-bundle\") pod \"602f5438-8315-4458-a349-d02a2d0553be\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.728972 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-config-data\") pod \"602f5438-8315-4458-a349-d02a2d0553be\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.729010 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ck8p\" (UniqueName: \"kubernetes.io/projected/602f5438-8315-4458-a349-d02a2d0553be-kube-api-access-8ck8p\") pod \"602f5438-8315-4458-a349-d02a2d0553be\" (UID: \"602f5438-8315-4458-a349-d02a2d0553be\") " Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.749014 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-scripts" (OuterVolumeSpecName: "scripts") pod "602f5438-8315-4458-a349-d02a2d0553be" (UID: "602f5438-8315-4458-a349-d02a2d0553be"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.753500 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602f5438-8315-4458-a349-d02a2d0553be-kube-api-access-8ck8p" (OuterVolumeSpecName: "kube-api-access-8ck8p") pod "602f5438-8315-4458-a349-d02a2d0553be" (UID: "602f5438-8315-4458-a349-d02a2d0553be"). InnerVolumeSpecName "kube-api-access-8ck8p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.760497 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-config-data" (OuterVolumeSpecName: "config-data") pod "602f5438-8315-4458-a349-d02a2d0553be" (UID: "602f5438-8315-4458-a349-d02a2d0553be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.787952 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "602f5438-8315-4458-a349-d02a2d0553be" (UID: "602f5438-8315-4458-a349-d02a2d0553be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.831802 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.831849 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ck8p\" (UniqueName: \"kubernetes.io/projected/602f5438-8315-4458-a349-d02a2d0553be-kube-api-access-8ck8p\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.831860 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:16 crc kubenswrapper[4731]: I1125 15:17:16.831868 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602f5438-8315-4458-a349-d02a2d0553be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.226020 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-d5dnp" event={"ID":"602f5438-8315-4458-a349-d02a2d0553be","Type":"ContainerDied","Data":"281030ce4d04c31211308d5c22836e0a309a19053eba9aebe26996135fd0f813"} Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.226085 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="281030ce4d04c31211308d5c22836e0a309a19053eba9aebe26996135fd0f813" Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.226084 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-d5dnp" Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.509357 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.510131 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="cc7eee5f-3ccc-4620-bc52-97e768a361b3" containerName="nova-scheduler-scheduler" containerID="cri-o://18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279" gracePeriod=30 Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.523603 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.523898 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75b28d9f-757a-452e-8e77-d726b697a64d" containerName="nova-api-log" containerID="cri-o://fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3" gracePeriod=30 Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.524044 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="75b28d9f-757a-452e-8e77-d726b697a64d" containerName="nova-api-api" containerID="cri-o://724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be" gracePeriod=30 Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.607319 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.607878 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-log" containerID="cri-o://f272995c2df8210d0df3e9972775067ab6307b04d5ca6511ce3a35771b29f150" gracePeriod=30 Nov 25 15:17:17 crc kubenswrapper[4731]: I1125 15:17:17.607915 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-metadata" containerID="cri-o://6c3cb3b9773d85266db990098c6666aa14be4f987b39067202fc96f0b187e450" gracePeriod=30 Nov 25 15:17:18 crc kubenswrapper[4731]: I1125 15:17:18.279102 4731 generic.go:334] "Generic (PLEG): container finished" podID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerID="f272995c2df8210d0df3e9972775067ab6307b04d5ca6511ce3a35771b29f150" exitCode=143 Nov 25 15:17:18 crc kubenswrapper[4731]: I1125 15:17:18.279187 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bfe044d-13a7-40d5-b612-b95d5d1e39ea","Type":"ContainerDied","Data":"f272995c2df8210d0df3e9972775067ab6307b04d5ca6511ce3a35771b29f150"} Nov 25 15:17:18 crc kubenswrapper[4731]: I1125 15:17:18.281838 4731 generic.go:334] "Generic (PLEG): container finished" podID="75b28d9f-757a-452e-8e77-d726b697a64d" containerID="fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3" exitCode=143 Nov 25 15:17:18 crc kubenswrapper[4731]: I1125 15:17:18.281841 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75b28d9f-757a-452e-8e77-d726b697a64d","Type":"ContainerDied","Data":"fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3"} Nov 25 15:17:19 crc kubenswrapper[4731]: E1125 15:17:19.976483 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command 
error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 15:17:19 crc kubenswrapper[4731]: E1125 15:17:19.979349 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 15:17:19 crc kubenswrapper[4731]: E1125 15:17:19.980768 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Nov 25 15:17:19 crc kubenswrapper[4731]: E1125 15:17:19.980839 4731 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="cc7eee5f-3ccc-4620-bc52-97e768a361b3" containerName="nova-scheduler-scheduler" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.040452 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.040516 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.040560 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.041393 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf03c5dc783d5b39c33c89b81293bca28d3310abcd7786c9974eab3a74d91a80"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.041451 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://bf03c5dc783d5b39c33c89b81293bca28d3310abcd7786c9974eab3a74d91a80" gracePeriod=600 Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.299686 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="bf03c5dc783d5b39c33c89b81293bca28d3310abcd7786c9974eab3a74d91a80" exitCode=0 Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.299755 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"bf03c5dc783d5b39c33c89b81293bca28d3310abcd7786c9974eab3a74d91a80"} Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.300026 4731 scope.go:117] "RemoveContainer" containerID="f9bc0406706a4644ebe665a4db06c99d1f92fdd1f784b0fafdbe78a287d513a2" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.768844 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.830436 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-combined-ca-bundle\") pod \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.830502 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-config-data\") pod \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.830698 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94pnn\" (UniqueName: \"kubernetes.io/projected/cc7eee5f-3ccc-4620-bc52-97e768a361b3-kube-api-access-94pnn\") pod \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\" (UID: \"cc7eee5f-3ccc-4620-bc52-97e768a361b3\") " Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.841141 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc7eee5f-3ccc-4620-bc52-97e768a361b3-kube-api-access-94pnn" (OuterVolumeSpecName: "kube-api-access-94pnn") pod "cc7eee5f-3ccc-4620-bc52-97e768a361b3" (UID: "cc7eee5f-3ccc-4620-bc52-97e768a361b3"). InnerVolumeSpecName "kube-api-access-94pnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.871281 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc7eee5f-3ccc-4620-bc52-97e768a361b3" (UID: "cc7eee5f-3ccc-4620-bc52-97e768a361b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.882684 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": dial tcp 10.217.0.199:8775: connect: connection refused" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.882705 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.199:8775/\": dial tcp 10.217.0.199:8775: connect: connection refused" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.883793 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-config-data" (OuterVolumeSpecName: "config-data") pod "cc7eee5f-3ccc-4620-bc52-97e768a361b3" (UID: "cc7eee5f-3ccc-4620-bc52-97e768a361b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.934393 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94pnn\" (UniqueName: \"kubernetes.io/projected/cc7eee5f-3ccc-4620-bc52-97e768a361b3-kube-api-access-94pnn\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.934441 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:20 crc kubenswrapper[4731]: I1125 15:17:20.934465 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc7eee5f-3ccc-4620-bc52-97e768a361b3-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.084204 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.137286 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4h76\" (UniqueName: \"kubernetes.io/projected/75b28d9f-757a-452e-8e77-d726b697a64d-kube-api-access-h4h76\") pod \"75b28d9f-757a-452e-8e77-d726b697a64d\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.137621 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-combined-ca-bundle\") pod \"75b28d9f-757a-452e-8e77-d726b697a64d\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.137687 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-public-tls-certs\") pod \"75b28d9f-757a-452e-8e77-d726b697a64d\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.137718 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b28d9f-757a-452e-8e77-d726b697a64d-logs\") pod \"75b28d9f-757a-452e-8e77-d726b697a64d\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.137834 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-config-data\") pod \"75b28d9f-757a-452e-8e77-d726b697a64d\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.137856 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-internal-tls-certs\") pod \"75b28d9f-757a-452e-8e77-d726b697a64d\" (UID: \"75b28d9f-757a-452e-8e77-d726b697a64d\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.138896 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75b28d9f-757a-452e-8e77-d726b697a64d-logs" (OuterVolumeSpecName: "logs") pod "75b28d9f-757a-452e-8e77-d726b697a64d" (UID: "75b28d9f-757a-452e-8e77-d726b697a64d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.167276 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b28d9f-757a-452e-8e77-d726b697a64d-kube-api-access-h4h76" (OuterVolumeSpecName: "kube-api-access-h4h76") pod "75b28d9f-757a-452e-8e77-d726b697a64d" (UID: "75b28d9f-757a-452e-8e77-d726b697a64d"). InnerVolumeSpecName "kube-api-access-h4h76". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.194025 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "75b28d9f-757a-452e-8e77-d726b697a64d" (UID: "75b28d9f-757a-452e-8e77-d726b697a64d"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.206109 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75b28d9f-757a-452e-8e77-d726b697a64d" (UID: "75b28d9f-757a-452e-8e77-d726b697a64d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.211096 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-config-data" (OuterVolumeSpecName: "config-data") pod "75b28d9f-757a-452e-8e77-d726b697a64d" (UID: "75b28d9f-757a-452e-8e77-d726b697a64d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.216047 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "75b28d9f-757a-452e-8e77-d726b697a64d" (UID: "75b28d9f-757a-452e-8e77-d726b697a64d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.249497 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/75b28d9f-757a-452e-8e77-d726b697a64d-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.249533 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.249543 4731 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.249555 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4h76\" (UniqueName: \"kubernetes.io/projected/75b28d9f-757a-452e-8e77-d726b697a64d-kube-api-access-h4h76\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.249565 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.249577 4731 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/75b28d9f-757a-452e-8e77-d726b697a64d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.314591 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8"} Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.317572 4731 generic.go:334] "Generic (PLEG): container finished" podID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" 
containerID="6c3cb3b9773d85266db990098c6666aa14be4f987b39067202fc96f0b187e450" exitCode=0 Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.317636 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bfe044d-13a7-40d5-b612-b95d5d1e39ea","Type":"ContainerDied","Data":"6c3cb3b9773d85266db990098c6666aa14be4f987b39067202fc96f0b187e450"} Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.317662 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bfe044d-13a7-40d5-b612-b95d5d1e39ea","Type":"ContainerDied","Data":"44ed8e8534c56d2f314d2e19c33ac98281aa8355676e64d0307376c114490825"} Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.317676 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44ed8e8534c56d2f314d2e19c33ac98281aa8355676e64d0307376c114490825" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.319620 4731 generic.go:334] "Generic (PLEG): container finished" podID="75b28d9f-757a-452e-8e77-d726b697a64d" containerID="724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be" exitCode=0 Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.319675 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75b28d9f-757a-452e-8e77-d726b697a64d","Type":"ContainerDied","Data":"724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be"} Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.319698 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"75b28d9f-757a-452e-8e77-d726b697a64d","Type":"ContainerDied","Data":"e01cf396e1e9183360e2c13fc5a05e903b5d90308726abbaeaac4abd77af7390"} Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.319719 4731 scope.go:117] "RemoveContainer" containerID="724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.319888 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.323529 4731 generic.go:334] "Generic (PLEG): container finished" podID="cc7eee5f-3ccc-4620-bc52-97e768a361b3" containerID="18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279" exitCode=0 Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.323574 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cc7eee5f-3ccc-4620-bc52-97e768a361b3","Type":"ContainerDied","Data":"18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279"} Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.323654 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"cc7eee5f-3ccc-4620-bc52-97e768a361b3","Type":"ContainerDied","Data":"37ed2c6ccd14e781471c53f8771e38d8da7fd1b52ceb4051154560c3d783a390"} Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.323587 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.360062 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.389163 4731 scope.go:117] "RemoveContainer" containerID="fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.398459 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.417912 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.431462 4731 scope.go:117] "RemoveContainer" containerID="724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be" Nov 25 15:17:21 crc kubenswrapper[4731]: E1125 15:17:21.433957 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be\": container with ID starting with 724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be not found: ID does not exist" containerID="724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.434118 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be"} err="failed to get container status \"724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be\": rpc error: code = NotFound desc = could not find container \"724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be\": container with ID starting with 724f962216be9821cb2edc46caa4511e6e038a3a018cc7b80981d258266b82be not found: ID does not exist" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.434222 4731 scope.go:117] "RemoveContainer" containerID="fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3" Nov 25 15:17:21 crc kubenswrapper[4731]: E1125 15:17:21.434973 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3\": container with ID starting with fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3 not found: ID does not exist" containerID="fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.435034 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3"} err="failed to get container status \"fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3\": rpc error: code = NotFound desc = could not find container \"fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3\": container with ID starting with fb6b75a4be02d5d693cdb56620e289d529e2767a08d0a501a266443a32b014e3 not found: ID does not exist" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.435067 4731 scope.go:117] "RemoveContainer" containerID="18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.440028 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.449158 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.452380 4731 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-combined-ca-bundle\") pod \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.452492 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-logs\") pod \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.454558 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-logs" (OuterVolumeSpecName: "logs") pod "3bfe044d-13a7-40d5-b612-b95d5d1e39ea" (UID: "3bfe044d-13a7-40d5-b612-b95d5d1e39ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.456903 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95dfw\" (UniqueName: \"kubernetes.io/projected/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-kube-api-access-95dfw\") pod \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.456977 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-nova-metadata-tls-certs\") pod \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.457029 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-config-data\") pod \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\" (UID: \"3bfe044d-13a7-40d5-b612-b95d5d1e39ea\") " Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.457750 4731 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-logs\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.459572 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:21 crc kubenswrapper[4731]: E1125 15:17:21.460199 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc7eee5f-3ccc-4620-bc52-97e768a361b3" containerName="nova-scheduler-scheduler" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.460223 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc7eee5f-3ccc-4620-bc52-97e768a361b3" containerName="nova-scheduler-scheduler" Nov 25 15:17:21 crc kubenswrapper[4731]: E1125 15:17:21.460241 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b28d9f-757a-452e-8e77-d726b697a64d" containerName="nova-api-api" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.460249 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b28d9f-757a-452e-8e77-d726b697a64d" containerName="nova-api-api" Nov 25 15:17:21 crc kubenswrapper[4731]: E1125 15:17:21.460264 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-log" Nov 25 15:17:21 crc 
kubenswrapper[4731]: I1125 15:17:21.460272 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-log" Nov 25 15:17:21 crc kubenswrapper[4731]: E1125 15:17:21.460288 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-metadata" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.460298 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-metadata" Nov 25 15:17:21 crc kubenswrapper[4731]: E1125 15:17:21.460324 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="602f5438-8315-4458-a349-d02a2d0553be" containerName="nova-manage" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.460332 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="602f5438-8315-4458-a349-d02a2d0553be" containerName="nova-manage" Nov 25 15:17:21 crc kubenswrapper[4731]: E1125 15:17:21.460348 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b28d9f-757a-452e-8e77-d726b697a64d" containerName="nova-api-log" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.460355 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b28d9f-757a-452e-8e77-d726b697a64d" containerName="nova-api-log" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.460914 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc7eee5f-3ccc-4620-bc52-97e768a361b3" containerName="nova-scheduler-scheduler" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.460935 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="602f5438-8315-4458-a349-d02a2d0553be" containerName="nova-manage" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.460985 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b28d9f-757a-452e-8e77-d726b697a64d" containerName="nova-api-api" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.461009 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-log" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.461023 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b28d9f-757a-452e-8e77-d726b697a64d" containerName="nova-api-log" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.461032 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" containerName="nova-metadata-metadata" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.462530 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.464858 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.465128 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.465339 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.468046 4731 scope.go:117] "RemoveContainer" containerID="18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279" Nov 25 15:17:21 crc kubenswrapper[4731]: E1125 15:17:21.473381 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279\": container with ID starting with 18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279 not found: ID does not exist" containerID="18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.473429 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279"} err="failed to get container status \"18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279\": rpc error: code = NotFound desc = could not find container \"18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279\": container with ID starting with 18352c724b57af56d75f5a641652a7a352256172f974b9630cad20974cb34279 not found: ID does not exist" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.478271 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-kube-api-access-95dfw" (OuterVolumeSpecName: "kube-api-access-95dfw") pod "3bfe044d-13a7-40d5-b612-b95d5d1e39ea" (UID: "3bfe044d-13a7-40d5-b612-b95d5d1e39ea"). InnerVolumeSpecName "kube-api-access-95dfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.478511 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.495185 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.498342 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-config-data" (OuterVolumeSpecName: "config-data") pod "3bfe044d-13a7-40d5-b612-b95d5d1e39ea" (UID: "3bfe044d-13a7-40d5-b612-b95d5d1e39ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.502945 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.505782 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.506360 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.525116 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bfe044d-13a7-40d5-b612-b95d5d1e39ea" (UID: "3bfe044d-13a7-40d5-b612-b95d5d1e39ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.545986 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3bfe044d-13a7-40d5-b612-b95d5d1e39ea" (UID: "3bfe044d-13a7-40d5-b612-b95d5d1e39ea"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.559093 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.559138 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-logs\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.559197 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-config-data\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.559239 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9f4z\" (UniqueName: \"kubernetes.io/projected/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-kube-api-access-b9f4z\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.559288 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-public-tls-certs\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.559308 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc 
kubenswrapper[4731]: I1125 15:17:21.559365 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.559378 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95dfw\" (UniqueName: \"kubernetes.io/projected/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-kube-api-access-95dfw\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.559392 4731 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.559400 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bfe044d-13a7-40d5-b612-b95d5d1e39ea-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.661297 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjrh\" (UniqueName: \"kubernetes.io/projected/250bffb8-9700-49f5-a1a2-39577b5bcba3-kube-api-access-dzjrh\") pod \"nova-scheduler-0\" (UID: \"250bffb8-9700-49f5-a1a2-39577b5bcba3\") " pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.661370 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-config-data\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.661415 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9f4z\" (UniqueName: \"kubernetes.io/projected/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-kube-api-access-b9f4z\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.661452 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250bffb8-9700-49f5-a1a2-39577b5bcba3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"250bffb8-9700-49f5-a1a2-39577b5bcba3\") " pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.661481 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-public-tls-certs\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.661500 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.661520 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250bffb8-9700-49f5-a1a2-39577b5bcba3-config-data\") 
pod \"nova-scheduler-0\" (UID: \"250bffb8-9700-49f5-a1a2-39577b5bcba3\") " pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.661569 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.661589 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-logs\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.661995 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-logs\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.665268 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.668241 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-config-data\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.669574 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-internal-tls-certs\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.671348 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-public-tls-certs\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.683206 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9f4z\" (UniqueName: \"kubernetes.io/projected/b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f-kube-api-access-b9f4z\") pod \"nova-api-0\" (UID: \"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f\") " pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.763100 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzjrh\" (UniqueName: \"kubernetes.io/projected/250bffb8-9700-49f5-a1a2-39577b5bcba3-kube-api-access-dzjrh\") pod \"nova-scheduler-0\" (UID: \"250bffb8-9700-49f5-a1a2-39577b5bcba3\") " pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.763224 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250bffb8-9700-49f5-a1a2-39577b5bcba3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: 
\"250bffb8-9700-49f5-a1a2-39577b5bcba3\") " pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.763285 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250bffb8-9700-49f5-a1a2-39577b5bcba3-config-data\") pod \"nova-scheduler-0\" (UID: \"250bffb8-9700-49f5-a1a2-39577b5bcba3\") " pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.770146 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/250bffb8-9700-49f5-a1a2-39577b5bcba3-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"250bffb8-9700-49f5-a1a2-39577b5bcba3\") " pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.771972 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/250bffb8-9700-49f5-a1a2-39577b5bcba3-config-data\") pod \"nova-scheduler-0\" (UID: \"250bffb8-9700-49f5-a1a2-39577b5bcba3\") " pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.787097 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzjrh\" (UniqueName: \"kubernetes.io/projected/250bffb8-9700-49f5-a1a2-39577b5bcba3-kube-api-access-dzjrh\") pod \"nova-scheduler-0\" (UID: \"250bffb8-9700-49f5-a1a2-39577b5bcba3\") " pod="openstack/nova-scheduler-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.835649 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Nov 25 15:17:21 crc kubenswrapper[4731]: I1125 15:17:21.842925 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.333925 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Nov 25 15:17:22 crc kubenswrapper[4731]: W1125 15:17:22.337266 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod250bffb8_9700_49f5_a1a2_39577b5bcba3.slice/crio-f7c28e8ba9f267c3cd9cdbc03ebf48f720fc2b673d0d5e323b5f792d3c084998 WatchSource:0}: Error finding container f7c28e8ba9f267c3cd9cdbc03ebf48f720fc2b673d0d5e323b5f792d3c084998: Status 404 returned error can't find the container with id f7c28e8ba9f267c3cd9cdbc03ebf48f720fc2b673d0d5e323b5f792d3c084998 Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.349728 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.410333 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.529979 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.541674 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.554425 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.558585 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.561881 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.562242 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.571366 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.701785 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/209fb0e3-5c2d-4e1d-b735-432bc80aa947-logs\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.701915 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/209fb0e3-5c2d-4e1d-b735-432bc80aa947-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.701995 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq9hj\" (UniqueName: \"kubernetes.io/projected/209fb0e3-5c2d-4e1d-b735-432bc80aa947-kube-api-access-rq9hj\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.702029 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209fb0e3-5c2d-4e1d-b735-432bc80aa947-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.702054 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209fb0e3-5c2d-4e1d-b735-432bc80aa947-config-data\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.804137 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209fb0e3-5c2d-4e1d-b735-432bc80aa947-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.804171 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq9hj\" (UniqueName: \"kubernetes.io/projected/209fb0e3-5c2d-4e1d-b735-432bc80aa947-kube-api-access-rq9hj\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.804190 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209fb0e3-5c2d-4e1d-b735-432bc80aa947-config-data\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " 
pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.804305 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/209fb0e3-5c2d-4e1d-b735-432bc80aa947-logs\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.804363 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/209fb0e3-5c2d-4e1d-b735-432bc80aa947-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.804769 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/209fb0e3-5c2d-4e1d-b735-432bc80aa947-logs\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.807810 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/209fb0e3-5c2d-4e1d-b735-432bc80aa947-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.808180 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/209fb0e3-5c2d-4e1d-b735-432bc80aa947-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.808363 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/209fb0e3-5c2d-4e1d-b735-432bc80aa947-config-data\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.825125 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bfe044d-13a7-40d5-b612-b95d5d1e39ea" path="/var/lib/kubelet/pods/3bfe044d-13a7-40d5-b612-b95d5d1e39ea/volumes" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.825693 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rq9hj\" (UniqueName: \"kubernetes.io/projected/209fb0e3-5c2d-4e1d-b735-432bc80aa947-kube-api-access-rq9hj\") pod \"nova-metadata-0\" (UID: \"209fb0e3-5c2d-4e1d-b735-432bc80aa947\") " pod="openstack/nova-metadata-0" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.827191 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75b28d9f-757a-452e-8e77-d726b697a64d" path="/var/lib/kubelet/pods/75b28d9f-757a-452e-8e77-d726b697a64d/volumes" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.827953 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc7eee5f-3ccc-4620-bc52-97e768a361b3" path="/var/lib/kubelet/pods/cc7eee5f-3ccc-4620-bc52-97e768a361b3/volumes" Nov 25 15:17:22 crc kubenswrapper[4731]: I1125 15:17:22.902115 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Nov 25 15:17:23 crc kubenswrapper[4731]: I1125 15:17:23.360895 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"250bffb8-9700-49f5-a1a2-39577b5bcba3","Type":"ContainerStarted","Data":"08c51eeddbfaa3281f3c7802cdc2b5c94e46f7a8a01cff15c979609bd1393ad4"} Nov 25 15:17:23 crc kubenswrapper[4731]: I1125 15:17:23.361294 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"250bffb8-9700-49f5-a1a2-39577b5bcba3","Type":"ContainerStarted","Data":"f7c28e8ba9f267c3cd9cdbc03ebf48f720fc2b673d0d5e323b5f792d3c084998"} Nov 25 15:17:23 crc kubenswrapper[4731]: I1125 15:17:23.371244 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f","Type":"ContainerStarted","Data":"48d12523dd7dea70bd41bb73adf6c038adf14be81ecc13adc79a8b92ad890cc0"} Nov 25 15:17:23 crc kubenswrapper[4731]: I1125 15:17:23.371682 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f","Type":"ContainerStarted","Data":"9113428321b329eb1d79b865114bcf6b08d0c617f02fac1666ac71f5b7a4f2b7"} Nov 25 15:17:23 crc kubenswrapper[4731]: I1125 15:17:23.371699 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f","Type":"ContainerStarted","Data":"09da3e4e821ac78ed8de976f8497425029bb4897de0e7a87c6f2aa0697d445f0"} Nov 25 15:17:23 crc kubenswrapper[4731]: I1125 15:17:23.380327 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Nov 25 15:17:23 crc kubenswrapper[4731]: W1125 15:17:23.381102 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod209fb0e3_5c2d_4e1d_b735_432bc80aa947.slice/crio-6e9c71d7df0e67b946311c2764da4d4f24e6fc3915b3aba6af6cc65117cd57ec WatchSource:0}: Error finding container 6e9c71d7df0e67b946311c2764da4d4f24e6fc3915b3aba6af6cc65117cd57ec: Status 404 returned error can't find the container with id 6e9c71d7df0e67b946311c2764da4d4f24e6fc3915b3aba6af6cc65117cd57ec Nov 25 15:17:23 crc kubenswrapper[4731]: I1125 15:17:23.406264 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.40624013 podStartE2EDuration="2.40624013s" podCreationTimestamp="2025-11-25 15:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:17:23.394387701 +0000 UTC m=+1402.141522161" watchObservedRunningTime="2025-11-25 15:17:23.40624013 +0000 UTC m=+1402.153374560" Nov 25 15:17:23 crc kubenswrapper[4731]: I1125 15:17:23.424898 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.424868999 podStartE2EDuration="2.424868999s" podCreationTimestamp="2025-11-25 15:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:17:23.423138315 +0000 UTC m=+1402.170272745" watchObservedRunningTime="2025-11-25 15:17:23.424868999 +0000 UTC m=+1402.172003479" Nov 25 15:17:24 crc kubenswrapper[4731]: I1125 15:17:24.381154 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"209fb0e3-5c2d-4e1d-b735-432bc80aa947","Type":"ContainerStarted","Data":"65acdae35868af0db1f7b5247723ed1f2f84f25edce9c27cc1a95d840b2c3518"} Nov 25 15:17:24 crc kubenswrapper[4731]: I1125 15:17:24.381492 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"209fb0e3-5c2d-4e1d-b735-432bc80aa947","Type":"ContainerStarted","Data":"39654567e85b03d7861d894f88d536c7afbe7605e70716bd9004599940cf58eb"} Nov 25 15:17:24 crc kubenswrapper[4731]: I1125 15:17:24.381507 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"209fb0e3-5c2d-4e1d-b735-432bc80aa947","Type":"ContainerStarted","Data":"6e9c71d7df0e67b946311c2764da4d4f24e6fc3915b3aba6af6cc65117cd57ec"} Nov 25 15:17:24 crc kubenswrapper[4731]: I1125 15:17:24.410188 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.41016746 podStartE2EDuration="2.41016746s" podCreationTimestamp="2025-11-25 15:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:17:24.403792179 +0000 UTC m=+1403.150926609" watchObservedRunningTime="2025-11-25 15:17:24.41016746 +0000 UTC m=+1403.157301900" Nov 25 15:17:24 crc kubenswrapper[4731]: I1125 15:17:24.955789 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:17:25 crc kubenswrapper[4731]: I1125 15:17:25.015808 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:17:26 crc kubenswrapper[4731]: I1125 15:17:26.780259 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzrd6"] Nov 25 15:17:26 crc kubenswrapper[4731]: I1125 15:17:26.781123 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dzrd6" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="registry-server" containerID="cri-o://f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd" gracePeriod=2 Nov 25 15:17:26 crc kubenswrapper[4731]: I1125 15:17:26.843376 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.248988 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.415353 4731 generic.go:334] "Generic (PLEG): container finished" podID="9752677e-1420-41db-a27c-96995a7553b8" containerID="f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd" exitCode=0 Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.415775 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzrd6" event={"ID":"9752677e-1420-41db-a27c-96995a7553b8","Type":"ContainerDied","Data":"f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd"} Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.416101 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dzrd6" event={"ID":"9752677e-1420-41db-a27c-96995a7553b8","Type":"ContainerDied","Data":"e9b653e05b2ab0a8bfdb080d76ce9c69463de50f5524fe31a0142aa0ac1b1c8c"} Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.415984 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dzrd6" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.416134 4731 scope.go:117] "RemoveContainer" containerID="f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.429326 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-utilities\") pod \"9752677e-1420-41db-a27c-96995a7553b8\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.429596 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-catalog-content\") pod \"9752677e-1420-41db-a27c-96995a7553b8\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.429798 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs422\" (UniqueName: \"kubernetes.io/projected/9752677e-1420-41db-a27c-96995a7553b8-kube-api-access-xs422\") pod \"9752677e-1420-41db-a27c-96995a7553b8\" (UID: \"9752677e-1420-41db-a27c-96995a7553b8\") " Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.430267 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-utilities" (OuterVolumeSpecName: "utilities") pod "9752677e-1420-41db-a27c-96995a7553b8" (UID: "9752677e-1420-41db-a27c-96995a7553b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.432339 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.437804 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9752677e-1420-41db-a27c-96995a7553b8-kube-api-access-xs422" (OuterVolumeSpecName: "kube-api-access-xs422") pod "9752677e-1420-41db-a27c-96995a7553b8" (UID: "9752677e-1420-41db-a27c-96995a7553b8"). InnerVolumeSpecName "kube-api-access-xs422". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.444147 4731 scope.go:117] "RemoveContainer" containerID="077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.491109 4731 scope.go:117] "RemoveContainer" containerID="b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.527073 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9752677e-1420-41db-a27c-96995a7553b8" (UID: "9752677e-1420-41db-a27c-96995a7553b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.533680 4731 scope.go:117] "RemoveContainer" containerID="f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd" Nov 25 15:17:27 crc kubenswrapper[4731]: E1125 15:17:27.534169 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd\": container with ID starting with f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd not found: ID does not exist" containerID="f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.534338 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd"} err="failed to get container status \"f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd\": rpc error: code = NotFound desc = could not find container \"f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd\": container with ID starting with f6353d8ba163bf214b5d0715bac4569d51ee99df32797590988938d0ce3c72bd not found: ID does not exist" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.534581 4731 scope.go:117] "RemoveContainer" containerID="077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.534839 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs422\" (UniqueName: \"kubernetes.io/projected/9752677e-1420-41db-a27c-96995a7553b8-kube-api-access-xs422\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.535006 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9752677e-1420-41db-a27c-96995a7553b8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:17:27 crc kubenswrapper[4731]: E1125 15:17:27.534971 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b\": container with ID starting with 077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b not found: ID does not exist" containerID="077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.535175 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b"} err="failed to get container 
status \"077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b\": rpc error: code = NotFound desc = could not find container \"077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b\": container with ID starting with 077d56a1cfb466b61117de39f91528740697c56c15c493f4ce75cf31b8cf230b not found: ID does not exist" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.535249 4731 scope.go:117] "RemoveContainer" containerID="b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac" Nov 25 15:17:27 crc kubenswrapper[4731]: E1125 15:17:27.536745 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac\": container with ID starting with b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac not found: ID does not exist" containerID="b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.536875 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac"} err="failed to get container status \"b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac\": rpc error: code = NotFound desc = could not find container \"b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac\": container with ID starting with b4f9c33258387ad34298a546a15c1a531a121bffed73801817a46e497c702eac not found: ID does not exist" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.755103 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dzrd6"] Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.771903 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dzrd6"] Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.902701 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:17:27 crc kubenswrapper[4731]: I1125 15:17:27.902762 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Nov 25 15:17:28 crc kubenswrapper[4731]: I1125 15:17:28.822406 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9752677e-1420-41db-a27c-96995a7553b8" path="/var/lib/kubelet/pods/9752677e-1420-41db-a27c-96995a7553b8/volumes" Nov 25 15:17:31 crc kubenswrapper[4731]: I1125 15:17:31.835968 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:17:31 crc kubenswrapper[4731]: I1125 15:17:31.836573 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Nov 25 15:17:31 crc kubenswrapper[4731]: I1125 15:17:31.843564 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Nov 25 15:17:31 crc kubenswrapper[4731]: I1125 15:17:31.876151 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Nov 25 15:17:32 crc kubenswrapper[4731]: I1125 15:17:32.499054 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Nov 25 15:17:32 crc kubenswrapper[4731]: I1125 15:17:32.852214 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f" containerName="nova-api-api" 
probeResult="failure" output="Get \"https://10.217.0.211:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:17:32 crc kubenswrapper[4731]: I1125 15:17:32.852214 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.211:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:17:32 crc kubenswrapper[4731]: I1125 15:17:32.903092 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 15:17:32 crc kubenswrapper[4731]: I1125 15:17:32.903155 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Nov 25 15:17:33 crc kubenswrapper[4731]: I1125 15:17:33.916044 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="209fb0e3-5c2d-4e1d-b735-432bc80aa947" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 25 15:17:33 crc kubenswrapper[4731]: I1125 15:17:33.917101 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="209fb0e3-5c2d-4e1d-b735-432bc80aa947" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:17:34 crc kubenswrapper[4731]: I1125 15:17:34.496769 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 15:17:41 crc kubenswrapper[4731]: I1125 15:17:41.843100 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 15:17:41 crc kubenswrapper[4731]: I1125 15:17:41.844566 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 15:17:41 crc kubenswrapper[4731]: I1125 15:17:41.848201 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Nov 25 15:17:41 crc kubenswrapper[4731]: I1125 15:17:41.853278 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 15:17:42 crc kubenswrapper[4731]: I1125 15:17:42.593958 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Nov 25 15:17:42 crc kubenswrapper[4731]: I1125 15:17:42.601531 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Nov 25 15:17:42 crc kubenswrapper[4731]: I1125 15:17:42.908544 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 15:17:42 crc kubenswrapper[4731]: I1125 15:17:42.909801 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Nov 25 15:17:42 crc kubenswrapper[4731]: I1125 15:17:42.919325 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 15:17:43 crc kubenswrapper[4731]: I1125 15:17:43.609929 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Nov 25 15:17:52 crc kubenswrapper[4731]: I1125 15:17:52.707527 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-server-0"] Nov 25 15:17:54 crc kubenswrapper[4731]: I1125 15:17:54.281943 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 15:17:57 crc kubenswrapper[4731]: I1125 15:17:57.138567 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" containerName="rabbitmq" containerID="cri-o://be0f019b31f669736ec904676738a38ee89afb650c4b13cb677a2844987914b5" gracePeriod=604796 Nov 25 15:17:58 crc kubenswrapper[4731]: I1125 15:17:58.588759 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" containerName="rabbitmq" containerID="cri-o://cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08" gracePeriod=604796 Nov 25 15:18:03 crc kubenswrapper[4731]: I1125 15:18:03.802378 4731 generic.go:334] "Generic (PLEG): container finished" podID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" containerID="be0f019b31f669736ec904676738a38ee89afb650c4b13cb677a2844987914b5" exitCode=0 Nov 25 15:18:03 crc kubenswrapper[4731]: I1125 15:18:03.802460 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9c970e0-c27c-4a82-afd5-97aacf7536f9","Type":"ContainerDied","Data":"be0f019b31f669736ec904676738a38ee89afb650c4b13cb677a2844987914b5"} Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.616568 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.682169 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-server-conf\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.682603 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-plugins-conf\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.682639 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-tls\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.682748 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-erlang-cookie\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.682868 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-config-data\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.682890 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-plugins\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.682913 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnlzr\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-kube-api-access-mnlzr\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.682958 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f9c970e0-c27c-4a82-afd5-97aacf7536f9-pod-info\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.682993 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-confd\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.683020 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.683062 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f9c970e0-c27c-4a82-afd5-97aacf7536f9-erlang-cookie-secret\") pod \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\" (UID: \"f9c970e0-c27c-4a82-afd5-97aacf7536f9\") " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.683585 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.686454 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.687913 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.690174 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.692120 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f9c970e0-c27c-4a82-afd5-97aacf7536f9-pod-info" (OuterVolumeSpecName: "pod-info") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.692452 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c970e0-c27c-4a82-afd5-97aacf7536f9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.696635 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-kube-api-access-mnlzr" (OuterVolumeSpecName: "kube-api-access-mnlzr") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "kube-api-access-mnlzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.699426 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.785383 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.785417 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnlzr\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-kube-api-access-mnlzr\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.785430 4731 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f9c970e0-c27c-4a82-afd5-97aacf7536f9-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.785452 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.785464 4731 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f9c970e0-c27c-4a82-afd5-97aacf7536f9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.785473 4731 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.785483 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.785494 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.786174 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-config-data" (OuterVolumeSpecName: "config-data") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.811188 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-server-conf" (OuterVolumeSpecName: "server-conf") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.817399 4731 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.819382 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc"
Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.843896 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f9c970e0-c27c-4a82-afd5-97aacf7536f9" (UID: "f9c970e0-c27c-4a82-afd5-97aacf7536f9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.887363 4731 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-server-conf\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.887395 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f9c970e0-c27c-4a82-afd5-97aacf7536f9-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.887404 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f9c970e0-c27c-4a82-afd5-97aacf7536f9-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.887418 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\""
Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.933164 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f9c970e0-c27c-4a82-afd5-97aacf7536f9","Type":"ContainerDied","Data":"54d79d86da21b9d0bda50ce35d49acb98e0d88148d6261692ad1bb332e58333c"}
Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.933218 4731 scope.go:117] "RemoveContainer" containerID="be0f019b31f669736ec904676738a38ee89afb650c4b13cb677a2844987914b5"
Nov 25 15:18:04 crc kubenswrapper[4731]: I1125 15:18:04.961051 4731 scope.go:117] "RemoveContainer" containerID="a801c820f5511e4c5db7b4c69c0cacb99eb6775c13be64ca3267bfeeb186738b"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.160663 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.171342 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.183830 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:18:05 crc kubenswrapper[4731]: E1125 15:18:05.184247 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="registry-server"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.184263 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="registry-server"
Nov 25 15:18:05 crc kubenswrapper[4731]: E1125 15:18:05.184290 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="extract-content"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.184296 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="extract-content"
Nov 25 15:18:05 crc kubenswrapper[4731]: E1125 15:18:05.184308 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" containerName="setup-container"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.184314 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" containerName="setup-container"
Nov 25 15:18:05 crc kubenswrapper[4731]: E1125 15:18:05.184325 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" containerName="rabbitmq"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.184331 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" containerName="rabbitmq"
Nov 25 15:18:05 crc kubenswrapper[4731]: E1125 15:18:05.184344 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="extract-utilities"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.184350 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="extract-utilities"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.184545 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" containerName="rabbitmq"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.184562 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9752677e-1420-41db-a27c-96995a7553b8" containerName="registry-server"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.185736 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.189729 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.189775 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.189920 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.190046 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.190049 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.190158 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.190359 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-r6w67"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.201180 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293414 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvjpt\" (UniqueName: \"kubernetes.io/projected/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-kube-api-access-bvjpt\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293467 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293499 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293531 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293549 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293586 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-config-data\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
\"kubernetes.io/configmap/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-config-data\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293643 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293674 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293734 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293762 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.293786 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.304758 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.395397 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvjpt\" (UniqueName: \"kubernetes.io/projected/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-kube-api-access-bvjpt\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.395797 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.395847 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc 
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.395883 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.395912 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.395949 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-config-data\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.396006 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.396039 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.396097 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.396124 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.396150 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.396482 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-server-0"
Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.397052 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0"
\"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.397992 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.398488 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.398810 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-config-data\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.399630 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-server-conf\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.403252 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.404662 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.407687 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.407891 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.428121 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvjpt\" (UniqueName: \"kubernetes.io/projected/abe2a548-008b-4d3d-b0d1-d6e62dc15a4d-kube-api-access-bvjpt\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.448209 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-server-0\" (UID: \"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d\") " pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.517548 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.683572 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805118 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805196 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-plugins-conf\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805228 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-tls\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805528 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-server-conf\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805570 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-pod-info\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805638 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-erlang-cookie-secret\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805677 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-erlang-cookie\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805729 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdjv8\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-kube-api-access-gdjv8\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805753 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-plugins\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805842 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-config-data\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.805893 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-confd\") pod \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\" (UID: \"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8\") " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.807665 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.808532 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.808913 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.825655 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-kube-api-access-gdjv8" (OuterVolumeSpecName: "kube-api-access-gdjv8") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "kube-api-access-gdjv8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.829624 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.834647 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.836141 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-pod-info" (OuterVolumeSpecName: "pod-info") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.836332 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.842693 4731 generic.go:334] "Generic (PLEG): container finished" podID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" containerID="cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08" exitCode=0 Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.842780 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8","Type":"ContainerDied","Data":"cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08"} Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.842810 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8","Type":"ContainerDied","Data":"5c568335f47f64342823a07158b91e5ec364d5802789814de7c3688c1af93ef7"} Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.842847 4731 scope.go:117] "RemoveContainer" containerID="cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.842990 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.904299 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-config-data" (OuterVolumeSpecName: "config-data") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.908773 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.908845 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.908861 4731 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-plugins-conf\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.908877 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.908889 4731 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-pod-info\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.908901 4731 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.908917 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.908931 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdjv8\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-kube-api-access-gdjv8\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.908944 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.944407 4731 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.951771 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-server-conf" (OuterVolumeSpecName: "server-conf") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:05 crc kubenswrapper[4731]: I1125 15:18:05.991468 4731 scope.go:117] "RemoveContainer" containerID="749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.002556 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" (UID: "08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.011067 4731 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-server-conf\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.011109 4731 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.011123 4731 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.020283 4731 scope.go:117] "RemoveContainer" containerID="cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08" Nov 25 15:18:06 crc kubenswrapper[4731]: E1125 15:18:06.020933 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08\": container with ID starting with cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08 not found: ID does not exist" containerID="cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.021101 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08"} err="failed to get container status \"cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08\": rpc error: code = NotFound desc = could not find container \"cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08\": container with ID starting with cffa18d7573f9bd4c3eca77b02952116fe3df84f8087350296ea123736d44a08 not found: ID does not exist" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.021167 4731 scope.go:117] "RemoveContainer" containerID="749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0" Nov 25 15:18:06 crc kubenswrapper[4731]: E1125 15:18:06.021918 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0\": container with ID starting with 749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0 not found: ID does not exist" containerID="749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.021982 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0"} err="failed to get container status \"749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0\": rpc error: code = NotFound desc = could not find container \"749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0\": container with ID starting with 749d7a65e9c7e0bf8844ce25e19f271ac52796eca22ae038cda0ede819d4c3f0 not found: ID does not exist" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.094734 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.203553 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.215585 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.229454 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 15:18:06 crc kubenswrapper[4731]: E1125 15:18:06.230234 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" containerName="rabbitmq" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.230257 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" containerName="rabbitmq" Nov 25 15:18:06 crc kubenswrapper[4731]: E1125 15:18:06.230300 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" containerName="setup-container" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.230310 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" containerName="setup-container" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.230565 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" containerName="rabbitmq" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.232130 4731 util.go:30] "No sandbox for pod can be found. 
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.237946 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.238268 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.238562 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.238850 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.238984 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.239096 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.239213 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-hmmsg"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.243924 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.316429 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.316517 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9902e27-284b-4015-994b-9b27f3254c5d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.316598 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8cn9\" (UniqueName: \"kubernetes.io/projected/c9902e27-284b-4015-994b-9b27f3254c5d-kube-api-access-h8cn9\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.316641 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.316718 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9902e27-284b-4015-994b-9b27f3254c5d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.316741 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0"
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.316868 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9902e27-284b-4015-994b-9b27f3254c5d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.316967 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.317328 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.317374 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9902e27-284b-4015-994b-9b27f3254c5d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.317407 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9902e27-284b-4015-994b-9b27f3254c5d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.418570 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9902e27-284b-4015-994b-9b27f3254c5d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.418626 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9902e27-284b-4015-994b-9b27f3254c5d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.418682 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.418721 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" 
(UniqueName: \"kubernetes.io/configmap/c9902e27-284b-4015-994b-9b27f3254c5d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.418761 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8cn9\" (UniqueName: \"kubernetes.io/projected/c9902e27-284b-4015-994b-9b27f3254c5d-kube-api-access-h8cn9\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.418784 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.418848 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9902e27-284b-4015-994b-9b27f3254c5d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.418874 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.418899 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9902e27-284b-4015-994b-9b27f3254c5d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.418948 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.419025 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.419529 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.419613 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.420086 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c9902e27-284b-4015-994b-9b27f3254c5d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.420132 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c9902e27-284b-4015-994b-9b27f3254c5d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.420384 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.420439 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c9902e27-284b-4015-994b-9b27f3254c5d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.424361 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c9902e27-284b-4015-994b-9b27f3254c5d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.424394 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.424409 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c9902e27-284b-4015-994b-9b27f3254c5d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.424858 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c9902e27-284b-4015-994b-9b27f3254c5d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.440089 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8cn9\" (UniqueName: \"kubernetes.io/projected/c9902e27-284b-4015-994b-9b27f3254c5d-kube-api-access-h8cn9\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.453962 4731 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c9902e27-284b-4015-994b-9b27f3254c5d\") " pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:06 crc kubenswrapper[4731]: I1125 15:18:06.552260 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:07 crc kubenswrapper[4731]: I1125 15:18:06.828189 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8" path="/var/lib/kubelet/pods/08b3ef48-58e1-4da1-a3ff-ecff7f3b19e8/volumes" Nov 25 15:18:07 crc kubenswrapper[4731]: I1125 15:18:06.829573 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c970e0-c27c-4a82-afd5-97aacf7536f9" path="/var/lib/kubelet/pods/f9c970e0-c27c-4a82-afd5-97aacf7536f9/volumes" Nov 25 15:18:07 crc kubenswrapper[4731]: I1125 15:18:06.867937 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d","Type":"ContainerStarted","Data":"3d2e1ebaf93877368b4433f171ccce653900c711d1ae47359ceda8931b82aa6a"} Nov 25 15:18:07 crc kubenswrapper[4731]: I1125 15:18:07.008991 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Nov 25 15:18:07 crc kubenswrapper[4731]: W1125 15:18:07.012585 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9902e27_284b_4015_994b_9b27f3254c5d.slice/crio-2f6ac244da4aa87613803ad2467bbafa4a6f7dd35344cad2c94d1041a58f3f18 WatchSource:0}: Error finding container 2f6ac244da4aa87613803ad2467bbafa4a6f7dd35344cad2c94d1041a58f3f18: Status 404 returned error can't find the container with id 2f6ac244da4aa87613803ad2467bbafa4a6f7dd35344cad2c94d1041a58f3f18 Nov 25 15:18:07 crc kubenswrapper[4731]: I1125 15:18:07.880039 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9902e27-284b-4015-994b-9b27f3254c5d","Type":"ContainerStarted","Data":"2f6ac244da4aa87613803ad2467bbafa4a6f7dd35344cad2c94d1041a58f3f18"} Nov 25 15:18:07 crc kubenswrapper[4731]: I1125 15:18:07.881825 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d","Type":"ContainerStarted","Data":"6a00b9980993e97e8767f03c43ad6c527fb7d94f182d08b2ffc9e7ba48dc3359"} Nov 25 15:18:09 crc kubenswrapper[4731]: I1125 15:18:09.902199 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9902e27-284b-4015-994b-9b27f3254c5d","Type":"ContainerStarted","Data":"c44a2a52cf7cf3d68c185a78618dfa957c48ba467d123d73915aebc5384f97b7"} Nov 25 15:18:12 crc kubenswrapper[4731]: I1125 15:18:12.984775 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lndbc"] Nov 25 15:18:12 crc kubenswrapper[4731]: I1125 15:18:12.986802 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:12 crc kubenswrapper[4731]: I1125 15:18:12.988594 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.025758 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lndbc"] Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.049592 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68r5h\" (UniqueName: \"kubernetes.io/projected/22b13e6e-2ced-4c6c-b584-ffaf76896534-kube-api-access-68r5h\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.049716 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.049763 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.049876 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.049910 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.049936 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-config\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.049967 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.152116 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.152172 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.152202 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-config\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.152230 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.152279 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68r5h\" (UniqueName: \"kubernetes.io/projected/22b13e6e-2ced-4c6c-b584-ffaf76896534-kube-api-access-68r5h\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.152308 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.152341 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.153294 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.153556 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.153618 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: 
\"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.153834 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-config\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.153850 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.154110 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.174954 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68r5h\" (UniqueName: \"kubernetes.io/projected/22b13e6e-2ced-4c6c-b584-ffaf76896534-kube-api-access-68r5h\") pod \"dnsmasq-dns-5b75489c6f-lndbc\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.307565 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.819766 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lndbc"] Nov 25 15:18:13 crc kubenswrapper[4731]: I1125 15:18:13.949915 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" event={"ID":"22b13e6e-2ced-4c6c-b584-ffaf76896534","Type":"ContainerStarted","Data":"93b65cb0a51357740c36b7aaf030296d33f44aa853b413ba34cdd9c27292c755"} Nov 25 15:18:14 crc kubenswrapper[4731]: I1125 15:18:14.960449 4731 generic.go:334] "Generic (PLEG): container finished" podID="22b13e6e-2ced-4c6c-b584-ffaf76896534" containerID="43efff9d4184c210b36d71f6ab909807a3ad1e0f7f5c4ec9126fd385ef4b2c27" exitCode=0 Nov 25 15:18:14 crc kubenswrapper[4731]: I1125 15:18:14.960533 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" event={"ID":"22b13e6e-2ced-4c6c-b584-ffaf76896534","Type":"ContainerDied","Data":"43efff9d4184c210b36d71f6ab909807a3ad1e0f7f5c4ec9126fd385ef4b2c27"} Nov 25 15:18:15 crc kubenswrapper[4731]: I1125 15:18:15.973690 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" event={"ID":"22b13e6e-2ced-4c6c-b584-ffaf76896534","Type":"ContainerStarted","Data":"5a0e6b98aeefafbe6b3ffdc0c2c417977aa0fb633c54c2a76e4f75b8e750f87c"} Nov 25 15:18:15 crc kubenswrapper[4731]: I1125 15:18:15.974109 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:16 crc kubenswrapper[4731]: I1125 15:18:16.015457 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" podStartSLOduration=4.015395653 
podStartE2EDuration="4.015395653s" podCreationTimestamp="2025-11-25 15:18:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:15.996910427 +0000 UTC m=+1454.744044857" watchObservedRunningTime="2025-11-25 15:18:16.015395653 +0000 UTC m=+1454.762530163" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.309216 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.392736 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-q2h5f"] Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.393407 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" podUID="1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" containerName="dnsmasq-dns" containerID="cri-o://14edbb1526f342225e8bb4a48d6e77e2db598fce1e09dea6d28510f03bd97d81" gracePeriod=10 Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.532789 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-n9ptm"] Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.551035 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-n9ptm"] Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.551159 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.693851 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.693898 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-config\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.693950 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.694189 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhk8v\" (UniqueName: \"kubernetes.io/projected/fb99c57e-0890-4b05-9072-311048298a07-kube-api-access-vhk8v\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.694276 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: 
\"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.694565 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.694660 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.796035 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhk8v\" (UniqueName: \"kubernetes.io/projected/fb99c57e-0890-4b05-9072-311048298a07-kube-api-access-vhk8v\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.796276 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.796458 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.796537 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.796628 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.796693 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-config\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.796765 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: 
\"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.797353 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.798186 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.798466 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.798808 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.798865 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.799307 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb99c57e-0890-4b05-9072-311048298a07-config\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.816739 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhk8v\" (UniqueName: \"kubernetes.io/projected/fb99c57e-0890-4b05-9072-311048298a07-kube-api-access-vhk8v\") pod \"dnsmasq-dns-5d75f767dc-n9ptm\" (UID: \"fb99c57e-0890-4b05-9072-311048298a07\") " pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:23 crc kubenswrapper[4731]: I1125 15:18:23.936034 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.092898 4731 generic.go:334] "Generic (PLEG): container finished" podID="1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" containerID="14edbb1526f342225e8bb4a48d6e77e2db598fce1e09dea6d28510f03bd97d81" exitCode=0 Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.092962 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" event={"ID":"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee","Type":"ContainerDied","Data":"14edbb1526f342225e8bb4a48d6e77e2db598fce1e09dea6d28510f03bd97d81"} Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.421526 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-n9ptm"] Nov 25 15:18:24 crc kubenswrapper[4731]: W1125 15:18:24.422230 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb99c57e_0890_4b05_9072_311048298a07.slice/crio-7d377ee8f5713ebbfda4a96d0bdb001af075974ee42b1bad83955d9888938d35 WatchSource:0}: Error finding container 7d377ee8f5713ebbfda4a96d0bdb001af075974ee42b1bad83955d9888938d35: Status 404 returned error can't find the container with id 7d377ee8f5713ebbfda4a96d0bdb001af075974ee42b1bad83955d9888938d35 Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.560459 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.617784 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-config\") pod \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.618119 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-swift-storage-0\") pod \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.618186 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-svc\") pod \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.618350 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6gmrn\" (UniqueName: \"kubernetes.io/projected/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-kube-api-access-6gmrn\") pod \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.618464 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-sb\") pod \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.618513 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-nb\") 
pod \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\" (UID: \"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee\") " Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.643148 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-kube-api-access-6gmrn" (OuterVolumeSpecName: "kube-api-access-6gmrn") pod "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" (UID: "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee"). InnerVolumeSpecName "kube-api-access-6gmrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.687573 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" (UID: "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.694004 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-config" (OuterVolumeSpecName: "config") pod "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" (UID: "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.702630 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" (UID: "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.703174 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" (UID: "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.704519 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" (UID: "1586ab0c-bd05-40c9-80f3-7d43e8ef3dee"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.722421 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.722457 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.722468 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.722477 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.722485 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6gmrn\" (UniqueName: \"kubernetes.io/projected/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-kube-api-access-6gmrn\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:24 crc kubenswrapper[4731]: I1125 15:18:24.722497 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:25 crc kubenswrapper[4731]: I1125 15:18:25.102742 4731 generic.go:334] "Generic (PLEG): container finished" podID="fb99c57e-0890-4b05-9072-311048298a07" containerID="ed92211dbac2b78d87592e294543ff7497c8925eb69c78821812bba630d6a7ac" exitCode=0 Nov 25 15:18:25 crc kubenswrapper[4731]: I1125 15:18:25.102794 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" event={"ID":"fb99c57e-0890-4b05-9072-311048298a07","Type":"ContainerDied","Data":"ed92211dbac2b78d87592e294543ff7497c8925eb69c78821812bba630d6a7ac"} Nov 25 15:18:25 crc kubenswrapper[4731]: I1125 15:18:25.102833 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" event={"ID":"fb99c57e-0890-4b05-9072-311048298a07","Type":"ContainerStarted","Data":"7d377ee8f5713ebbfda4a96d0bdb001af075974ee42b1bad83955d9888938d35"} Nov 25 15:18:25 crc kubenswrapper[4731]: I1125 15:18:25.106576 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" event={"ID":"1586ab0c-bd05-40c9-80f3-7d43e8ef3dee","Type":"ContainerDied","Data":"e706099891e1470d7d827f8af9b30e7f6173786030a0636d0679f765ede8ca6c"} Nov 25 15:18:25 crc kubenswrapper[4731]: I1125 15:18:25.106621 4731 scope.go:117] "RemoveContainer" containerID="14edbb1526f342225e8bb4a48d6e77e2db598fce1e09dea6d28510f03bd97d81" Nov 25 15:18:25 crc kubenswrapper[4731]: I1125 15:18:25.106630 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-q2h5f" Nov 25 15:18:25 crc kubenswrapper[4731]: I1125 15:18:25.192965 4731 scope.go:117] "RemoveContainer" containerID="deef86bfab59b86efcd7f4fd4bc95ed18be374d9085529deddf629a416572939" Nov 25 15:18:25 crc kubenswrapper[4731]: I1125 15:18:25.201094 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-q2h5f"] Nov 25 15:18:25 crc kubenswrapper[4731]: I1125 15:18:25.211522 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-q2h5f"] Nov 25 15:18:26 crc kubenswrapper[4731]: I1125 15:18:26.117852 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" event={"ID":"fb99c57e-0890-4b05-9072-311048298a07","Type":"ContainerStarted","Data":"f456aa8790116412eefc22e154e63d18092a7bec87edbf235c7b20dcc770e9f4"} Nov 25 15:18:26 crc kubenswrapper[4731]: I1125 15:18:26.118327 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:26 crc kubenswrapper[4731]: I1125 15:18:26.140830 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" podStartSLOduration=3.140797432 podStartE2EDuration="3.140797432s" podCreationTimestamp="2025-11-25 15:18:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:26.139723185 +0000 UTC m=+1464.886857625" watchObservedRunningTime="2025-11-25 15:18:26.140797432 +0000 UTC m=+1464.887931862" Nov 25 15:18:26 crc kubenswrapper[4731]: I1125 15:18:26.824735 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" path="/var/lib/kubelet/pods/1586ab0c-bd05-40c9-80f3-7d43e8ef3dee/volumes" Nov 25 15:18:33 crc kubenswrapper[4731]: I1125 15:18:33.938109 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-n9ptm" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.021283 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lndbc"] Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.021588 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" podUID="22b13e6e-2ced-4c6c-b584-ffaf76896534" containerName="dnsmasq-dns" containerID="cri-o://5a0e6b98aeefafbe6b3ffdc0c2c417977aa0fb633c54c2a76e4f75b8e750f87c" gracePeriod=10 Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.205009 4731 generic.go:334] "Generic (PLEG): container finished" podID="22b13e6e-2ced-4c6c-b584-ffaf76896534" containerID="5a0e6b98aeefafbe6b3ffdc0c2c417977aa0fb633c54c2a76e4f75b8e750f87c" exitCode=0 Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.205062 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" event={"ID":"22b13e6e-2ced-4c6c-b584-ffaf76896534","Type":"ContainerDied","Data":"5a0e6b98aeefafbe6b3ffdc0c2c417977aa0fb633c54c2a76e4f75b8e750f87c"} Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.487331 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.544254 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-sb\") pod \"22b13e6e-2ced-4c6c-b584-ffaf76896534\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.544368 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-swift-storage-0\") pod \"22b13e6e-2ced-4c6c-b584-ffaf76896534\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.544434 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-svc\") pod \"22b13e6e-2ced-4c6c-b584-ffaf76896534\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.544505 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68r5h\" (UniqueName: \"kubernetes.io/projected/22b13e6e-2ced-4c6c-b584-ffaf76896534-kube-api-access-68r5h\") pod \"22b13e6e-2ced-4c6c-b584-ffaf76896534\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.544583 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-config\") pod \"22b13e6e-2ced-4c6c-b584-ffaf76896534\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.544635 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-openstack-edpm-ipam\") pod \"22b13e6e-2ced-4c6c-b584-ffaf76896534\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.544673 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-nb\") pod \"22b13e6e-2ced-4c6c-b584-ffaf76896534\" (UID: \"22b13e6e-2ced-4c6c-b584-ffaf76896534\") " Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.556417 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22b13e6e-2ced-4c6c-b584-ffaf76896534-kube-api-access-68r5h" (OuterVolumeSpecName: "kube-api-access-68r5h") pod "22b13e6e-2ced-4c6c-b584-ffaf76896534" (UID: "22b13e6e-2ced-4c6c-b584-ffaf76896534"). InnerVolumeSpecName "kube-api-access-68r5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.598538 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "22b13e6e-2ced-4c6c-b584-ffaf76896534" (UID: "22b13e6e-2ced-4c6c-b584-ffaf76896534"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.602212 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "22b13e6e-2ced-4c6c-b584-ffaf76896534" (UID: "22b13e6e-2ced-4c6c-b584-ffaf76896534"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.603674 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-config" (OuterVolumeSpecName: "config") pod "22b13e6e-2ced-4c6c-b584-ffaf76896534" (UID: "22b13e6e-2ced-4c6c-b584-ffaf76896534"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.620041 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "22b13e6e-2ced-4c6c-b584-ffaf76896534" (UID: "22b13e6e-2ced-4c6c-b584-ffaf76896534"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.622100 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "22b13e6e-2ced-4c6c-b584-ffaf76896534" (UID: "22b13e6e-2ced-4c6c-b584-ffaf76896534"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.626485 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "22b13e6e-2ced-4c6c-b584-ffaf76896534" (UID: "22b13e6e-2ced-4c6c-b584-ffaf76896534"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.648030 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68r5h\" (UniqueName: \"kubernetes.io/projected/22b13e6e-2ced-4c6c-b584-ffaf76896534-kube-api-access-68r5h\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.648076 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.648088 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.648099 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.648109 4731 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.648117 4731 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:34 crc kubenswrapper[4731]: I1125 15:18:34.648129 4731 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/22b13e6e-2ced-4c6c-b584-ffaf76896534-dns-svc\") on node \"crc\" DevicePath \"\"" Nov 25 15:18:35 crc kubenswrapper[4731]: I1125 15:18:35.224251 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" event={"ID":"22b13e6e-2ced-4c6c-b584-ffaf76896534","Type":"ContainerDied","Data":"93b65cb0a51357740c36b7aaf030296d33f44aa853b413ba34cdd9c27292c755"} Nov 25 15:18:35 crc kubenswrapper[4731]: I1125 15:18:35.224778 4731 scope.go:117] "RemoveContainer" containerID="5a0e6b98aeefafbe6b3ffdc0c2c417977aa0fb633c54c2a76e4f75b8e750f87c" Nov 25 15:18:35 crc kubenswrapper[4731]: I1125 15:18:35.225116 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-lndbc" Nov 25 15:18:35 crc kubenswrapper[4731]: I1125 15:18:35.259395 4731 scope.go:117] "RemoveContainer" containerID="43efff9d4184c210b36d71f6ab909807a3ad1e0f7f5c4ec9126fd385ef4b2c27" Nov 25 15:18:35 crc kubenswrapper[4731]: I1125 15:18:35.263060 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lndbc"] Nov 25 15:18:35 crc kubenswrapper[4731]: I1125 15:18:35.287804 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-lndbc"] Nov 25 15:18:36 crc kubenswrapper[4731]: I1125 15:18:36.837182 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22b13e6e-2ced-4c6c-b584-ffaf76896534" path="/var/lib/kubelet/pods/22b13e6e-2ced-4c6c-b584-ffaf76896534/volumes" Nov 25 15:18:40 crc kubenswrapper[4731]: I1125 15:18:40.287216 4731 generic.go:334] "Generic (PLEG): container finished" podID="abe2a548-008b-4d3d-b0d1-d6e62dc15a4d" containerID="6a00b9980993e97e8767f03c43ad6c527fb7d94f182d08b2ffc9e7ba48dc3359" exitCode=0 Nov 25 15:18:40 crc kubenswrapper[4731]: I1125 15:18:40.287363 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d","Type":"ContainerDied","Data":"6a00b9980993e97e8767f03c43ad6c527fb7d94f182d08b2ffc9e7ba48dc3359"} Nov 25 15:18:41 crc kubenswrapper[4731]: I1125 15:18:41.302179 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"abe2a548-008b-4d3d-b0d1-d6e62dc15a4d","Type":"ContainerStarted","Data":"13c1019c6fe4082c7f2f0fa73a71417c93cdaaa143eeabf75c39c8d4d8efa695"} Nov 25 15:18:41 crc kubenswrapper[4731]: I1125 15:18:41.302965 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Nov 25 15:18:41 crc kubenswrapper[4731]: I1125 15:18:41.329256 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.329235962 podStartE2EDuration="36.329235962s" podCreationTimestamp="2025-11-25 15:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:41.325012776 +0000 UTC m=+1480.072147226" watchObservedRunningTime="2025-11-25 15:18:41.329235962 +0000 UTC m=+1480.076370392" Nov 25 15:18:42 crc kubenswrapper[4731]: I1125 15:18:42.314087 4731 generic.go:334] "Generic (PLEG): container finished" podID="c9902e27-284b-4015-994b-9b27f3254c5d" containerID="c44a2a52cf7cf3d68c185a78618dfa957c48ba467d123d73915aebc5384f97b7" exitCode=0 Nov 25 15:18:42 crc kubenswrapper[4731]: I1125 15:18:42.314176 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9902e27-284b-4015-994b-9b27f3254c5d","Type":"ContainerDied","Data":"c44a2a52cf7cf3d68c185a78618dfa957c48ba467d123d73915aebc5384f97b7"} Nov 25 15:18:43 crc kubenswrapper[4731]: I1125 15:18:43.326703 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c9902e27-284b-4015-994b-9b27f3254c5d","Type":"ContainerStarted","Data":"c50492f5d88324fa4b19d79677a9c44dcebfb59c94f1afdafa126c0e33c71f03"} Nov 25 15:18:43 crc kubenswrapper[4731]: I1125 15:18:43.327351 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Nov 25 15:18:43 crc kubenswrapper[4731]: I1125 15:18:43.354332 4731 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.354314185 podStartE2EDuration="37.354314185s" podCreationTimestamp="2025-11-25 15:18:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:18:43.346223901 +0000 UTC m=+1482.093358351" watchObservedRunningTime="2025-11-25 15:18:43.354314185 +0000 UTC m=+1482.101448615"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.356040 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"]
Nov 25 15:18:47 crc kubenswrapper[4731]: E1125 15:18:47.357139 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" containerName="init"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.357159 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" containerName="init"
Nov 25 15:18:47 crc kubenswrapper[4731]: E1125 15:18:47.357182 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b13e6e-2ced-4c6c-b584-ffaf76896534" containerName="dnsmasq-dns"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.357207 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b13e6e-2ced-4c6c-b584-ffaf76896534" containerName="dnsmasq-dns"
Nov 25 15:18:47 crc kubenswrapper[4731]: E1125 15:18:47.357224 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22b13e6e-2ced-4c6c-b584-ffaf76896534" containerName="init"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.357232 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="22b13e6e-2ced-4c6c-b584-ffaf76896534" containerName="init"
Nov 25 15:18:47 crc kubenswrapper[4731]: E1125 15:18:47.357265 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" containerName="dnsmasq-dns"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.357272 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" containerName="dnsmasq-dns"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.357517 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="22b13e6e-2ced-4c6c-b584-ffaf76896534" containerName="dnsmasq-dns"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.357546 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="1586ab0c-bd05-40c9-80f3-7d43e8ef3dee" containerName="dnsmasq-dns"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.358344 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.361455 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.361791 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.363178 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.364693 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.385363 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"]
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.423121 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.423949 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.424215 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.424299 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxt8\" (UniqueName: \"kubernetes.io/projected/35e801a7-1cfb-402e-9194-9e25042d0d85-kube-api-access-mwxt8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.526607 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.526692 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.526722 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxt8\" (UniqueName: \"kubernetes.io/projected/35e801a7-1cfb-402e-9194-9e25042d0d85-kube-api-access-mwxt8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.526780 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.533380 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.533775 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.546106 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.546779 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxt8\" (UniqueName: \"kubernetes.io/projected/35e801a7-1cfb-402e-9194-9e25042d0d85-kube-api-access-mwxt8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:47 crc kubenswrapper[4731]: I1125 15:18:47.689355 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:18:48 crc kubenswrapper[4731]: I1125 15:18:48.212122 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"]
Nov 25 15:18:48 crc kubenswrapper[4731]: I1125 15:18:48.373156 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq" event={"ID":"35e801a7-1cfb-402e-9194-9e25042d0d85","Type":"ContainerStarted","Data":"e45c428dd5b0f70db2c0bdb9b185201f368d4a2268cfc9d6520b12a6f44c9d0b"}
Nov 25 15:18:55 crc kubenswrapper[4731]: I1125 15:18:55.526039 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Nov 25 15:18:56 crc kubenswrapper[4731]: I1125 15:18:56.556065 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Nov 25 15:19:02 crc kubenswrapper[4731]: I1125 15:19:02.516438 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq" event={"ID":"35e801a7-1cfb-402e-9194-9e25042d0d85","Type":"ContainerStarted","Data":"105b2eb61271ea34caa94265c5e1c5e422ae27f720355bac12c4847d72124474"}
Nov 25 15:19:02 crc kubenswrapper[4731]: I1125 15:19:02.536142 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq" podStartSLOduration=1.748246653 podStartE2EDuration="15.536125754s" podCreationTimestamp="2025-11-25 15:18:47 +0000 UTC" firstStartedPulling="2025-11-25 15:18:48.218714531 +0000 UTC m=+1486.965848961" lastFinishedPulling="2025-11-25 15:19:02.006593632 +0000 UTC m=+1500.753728062" observedRunningTime="2025-11-25 15:19:02.531855444 +0000 UTC m=+1501.278989894" watchObservedRunningTime="2025-11-25 15:19:02.536125754 +0000 UTC m=+1501.283260184"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.029406 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wnb2m"]
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.033958 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.059467 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wnb2m"]
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.146641 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-catalog-content\") pod \"certified-operators-wnb2m\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") " pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.146693 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-utilities\") pod \"certified-operators-wnb2m\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") " pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.146715 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ndqr\" (UniqueName: \"kubernetes.io/projected/89a2a194-d391-4572-aadc-a05524d2b134-kube-api-access-4ndqr\") pod \"certified-operators-wnb2m\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") " pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.249186 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-catalog-content\") pod \"certified-operators-wnb2m\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") " pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.249301 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-utilities\") pod \"certified-operators-wnb2m\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") " pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.249331 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ndqr\" (UniqueName: \"kubernetes.io/projected/89a2a194-d391-4572-aadc-a05524d2b134-kube-api-access-4ndqr\") pod \"certified-operators-wnb2m\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") " pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.249919 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-utilities\") pod \"certified-operators-wnb2m\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") " pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.249967 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-catalog-content\") pod \"certified-operators-wnb2m\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") " pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.269481 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ndqr\" (UniqueName: \"kubernetes.io/projected/89a2a194-d391-4572-aadc-a05524d2b134-kube-api-access-4ndqr\") pod \"certified-operators-wnb2m\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") " pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.353492 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:07 crc kubenswrapper[4731]: I1125 15:19:07.892166 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wnb2m"]
Nov 25 15:19:07 crc kubenswrapper[4731]: W1125 15:19:07.901003 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89a2a194_d391_4572_aadc_a05524d2b134.slice/crio-ccec258a7bdda783b5902f7967cb81e7703e2a72cd97736d45ff0f6c59349433 WatchSource:0}: Error finding container ccec258a7bdda783b5902f7967cb81e7703e2a72cd97736d45ff0f6c59349433: Status 404 returned error can't find the container with id ccec258a7bdda783b5902f7967cb81e7703e2a72cd97736d45ff0f6c59349433
Nov 25 15:19:08 crc kubenswrapper[4731]: I1125 15:19:08.585052 4731 generic.go:334] "Generic (PLEG): container finished" podID="89a2a194-d391-4572-aadc-a05524d2b134" containerID="274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268" exitCode=0
Nov 25 15:19:08 crc kubenswrapper[4731]: I1125 15:19:08.585127 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnb2m" event={"ID":"89a2a194-d391-4572-aadc-a05524d2b134","Type":"ContainerDied","Data":"274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268"}
Nov 25 15:19:08 crc kubenswrapper[4731]: I1125 15:19:08.585386 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnb2m" event={"ID":"89a2a194-d391-4572-aadc-a05524d2b134","Type":"ContainerStarted","Data":"ccec258a7bdda783b5902f7967cb81e7703e2a72cd97736d45ff0f6c59349433"}
Nov 25 15:19:10 crc kubenswrapper[4731]: I1125 15:19:10.606578 4731 generic.go:334] "Generic (PLEG): container finished" podID="89a2a194-d391-4572-aadc-a05524d2b134" containerID="39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202" exitCode=0
Nov 25 15:19:10 crc kubenswrapper[4731]: I1125 15:19:10.606741 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnb2m" event={"ID":"89a2a194-d391-4572-aadc-a05524d2b134","Type":"ContainerDied","Data":"39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202"}
Nov 25 15:19:11 crc kubenswrapper[4731]: I1125 15:19:11.617835 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnb2m" event={"ID":"89a2a194-d391-4572-aadc-a05524d2b134","Type":"ContainerStarted","Data":"7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59"}
Nov 25 15:19:11 crc kubenswrapper[4731]: I1125 15:19:11.646444 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wnb2m" podStartSLOduration=2.180084945 podStartE2EDuration="4.646420985s" podCreationTimestamp="2025-11-25 15:19:07 +0000 UTC" firstStartedPulling="2025-11-25 15:19:08.588775321 +0000 UTC m=+1507.335909751" lastFinishedPulling="2025-11-25 15:19:11.055111361 +0000 UTC m=+1509.802245791" observedRunningTime="2025-11-25 15:19:11.635777256 +0000 UTC m=+1510.382911696" watchObservedRunningTime="2025-11-25 15:19:11.646420985 +0000 UTC m=+1510.393555415"
Nov 25 15:19:12 crc kubenswrapper[4731]: I1125 15:19:12.326223 4731 scope.go:117] "RemoveContainer" containerID="ed15a7fdf87593dced3dde7d60cfe24fc4eb81538e7f21e59b6581a2718cafac"
Nov 25 15:19:12 crc kubenswrapper[4731]: I1125 15:19:12.350187 4731 scope.go:117] "RemoveContainer" containerID="00b56dd0635be08180db3060204a4c0ee849da658f95acbef6f56a58db7203a4"
Nov 25 15:19:12 crc kubenswrapper[4731]: I1125 15:19:12.406739 4731 scope.go:117] "RemoveContainer" containerID="dfb7a653d9b0aed12a37e808f87152651b4cb9e6ffa1260c9ce1d3fcfaf1ff33"
Nov 25 15:19:12 crc kubenswrapper[4731]: I1125 15:19:12.457812 4731 scope.go:117] "RemoveContainer" containerID="c25029d63b4507ef0dddccf794c9a35c08e26ed37d06ab8a559e3f1a839f16db"
Nov 25 15:19:12 crc kubenswrapper[4731]: I1125 15:19:12.583169 4731 scope.go:117] "RemoveContainer" containerID="42e560f7f6a4e0f9ab28f351b9e4673464cdf162134df24280e8bb276d521836"
Nov 25 15:19:12 crc kubenswrapper[4731]: I1125 15:19:12.614405 4731 scope.go:117] "RemoveContainer" containerID="2909c1e0a65b5c4a71509df9879d5e4d7f5c19a479a9c96f68a0c1731d774a53"
Nov 25 15:19:12 crc kubenswrapper[4731]: I1125 15:19:12.711692 4731 scope.go:117] "RemoveContainer" containerID="74ef5049b547c2aff5288789bb966aa69b75cc19cbd4b5604e632069a1e9d754"
Nov 25 15:19:12 crc kubenswrapper[4731]: I1125 15:19:12.752896 4731 scope.go:117] "RemoveContainer" containerID="e02fdcbeaba1fb4ae6a3f06519f1d980669d537440993706c27fc96737748eec"
Nov 25 15:19:14 crc kubenswrapper[4731]: I1125 15:19:14.669799 4731 generic.go:334] "Generic (PLEG): container finished" podID="35e801a7-1cfb-402e-9194-9e25042d0d85" containerID="105b2eb61271ea34caa94265c5e1c5e422ae27f720355bac12c4847d72124474" exitCode=0
Nov 25 15:19:14 crc kubenswrapper[4731]: I1125 15:19:14.669851 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq" event={"ID":"35e801a7-1cfb-402e-9194-9e25042d0d85","Type":"ContainerDied","Data":"105b2eb61271ea34caa94265c5e1c5e422ae27f720355bac12c4847d72124474"}
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.151342 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.243115 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-ssh-key\") pod \"35e801a7-1cfb-402e-9194-9e25042d0d85\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") "
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.243210 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-inventory\") pod \"35e801a7-1cfb-402e-9194-9e25042d0d85\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") "
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.243281 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwxt8\" (UniqueName: \"kubernetes.io/projected/35e801a7-1cfb-402e-9194-9e25042d0d85-kube-api-access-mwxt8\") pod \"35e801a7-1cfb-402e-9194-9e25042d0d85\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") "
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.243347 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-repo-setup-combined-ca-bundle\") pod \"35e801a7-1cfb-402e-9194-9e25042d0d85\" (UID: \"35e801a7-1cfb-402e-9194-9e25042d0d85\") "
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.256931 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35e801a7-1cfb-402e-9194-9e25042d0d85-kube-api-access-mwxt8" (OuterVolumeSpecName: "kube-api-access-mwxt8") pod "35e801a7-1cfb-402e-9194-9e25042d0d85" (UID: "35e801a7-1cfb-402e-9194-9e25042d0d85"). InnerVolumeSpecName "kube-api-access-mwxt8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.257080 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "35e801a7-1cfb-402e-9194-9e25042d0d85" (UID: "35e801a7-1cfb-402e-9194-9e25042d0d85"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.272146 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "35e801a7-1cfb-402e-9194-9e25042d0d85" (UID: "35e801a7-1cfb-402e-9194-9e25042d0d85"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.282868 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-inventory" (OuterVolumeSpecName: "inventory") pod "35e801a7-1cfb-402e-9194-9e25042d0d85" (UID: "35e801a7-1cfb-402e-9194-9e25042d0d85"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.345510 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwxt8\" (UniqueName: \"kubernetes.io/projected/35e801a7-1cfb-402e-9194-9e25042d0d85-kube-api-access-mwxt8\") on node \"crc\" DevicePath \"\""
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.345577 4731 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.345594 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.345605 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/35e801a7-1cfb-402e-9194-9e25042d0d85-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.709102 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq" event={"ID":"35e801a7-1cfb-402e-9194-9e25042d0d85","Type":"ContainerDied","Data":"e45c428dd5b0f70db2c0bdb9b185201f368d4a2268cfc9d6520b12a6f44c9d0b"}
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.709148 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e45c428dd5b0f70db2c0bdb9b185201f368d4a2268cfc9d6520b12a6f44c9d0b"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.709651 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.768885 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"]
Nov 25 15:19:16 crc kubenswrapper[4731]: E1125 15:19:16.769302 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35e801a7-1cfb-402e-9194-9e25042d0d85" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.769319 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="35e801a7-1cfb-402e-9194-9e25042d0d85" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.769516 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="35e801a7-1cfb-402e-9194-9e25042d0d85" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.770191 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.781695 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.782415 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.784691 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.784864 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.786696 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"]
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.856128 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h9bbc\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.856240 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h9bbc\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.856440 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jtnq\" (UniqueName: \"kubernetes.io/projected/4e21c7c5-3531-4470-b2d5-a57c748b4add-kube-api-access-4jtnq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h9bbc\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.958512 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h9bbc\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.958593 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h9bbc\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.958686 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jtnq\" (UniqueName: \"kubernetes.io/projected/4e21c7c5-3531-4470-b2d5-a57c748b4add-kube-api-access-4jtnq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h9bbc\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.964264 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h9bbc\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.967448 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h9bbc\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:16 crc kubenswrapper[4731]: I1125 15:19:16.978775 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jtnq\" (UniqueName: \"kubernetes.io/projected/4e21c7c5-3531-4470-b2d5-a57c748b4add-kube-api-access-4jtnq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-h9bbc\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:17 crc kubenswrapper[4731]: I1125 15:19:17.094671 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:17 crc kubenswrapper[4731]: I1125 15:19:17.354383 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:17 crc kubenswrapper[4731]: I1125 15:19:17.354643 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:17 crc kubenswrapper[4731]: I1125 15:19:17.418103 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:17 crc kubenswrapper[4731]: I1125 15:19:17.675858 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"]
Nov 25 15:19:17 crc kubenswrapper[4731]: I1125 15:19:17.719404 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc" event={"ID":"4e21c7c5-3531-4470-b2d5-a57c748b4add","Type":"ContainerStarted","Data":"8421b576d9b90138f002bd77b3e9c95de228aaa04644a74ad55c2c72519b0cd8"}
Nov 25 15:19:17 crc kubenswrapper[4731]: I1125 15:19:17.769142 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:17 crc kubenswrapper[4731]: I1125 15:19:17.816624 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wnb2m"]
Nov 25 15:19:18 crc kubenswrapper[4731]: I1125 15:19:18.729071 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc" event={"ID":"4e21c7c5-3531-4470-b2d5-a57c748b4add","Type":"ContainerStarted","Data":"ad11a80659a2732ba078f83f1e64c238fe319fa3bcfb37be7bbf8646b39dc94c"}
Nov 25 15:19:18 crc kubenswrapper[4731]: I1125 15:19:18.752766 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc" podStartSLOduration=2.106268781 podStartE2EDuration="2.75274014s" podCreationTimestamp="2025-11-25 15:19:16 +0000 UTC" firstStartedPulling="2025-11-25 15:19:17.6818129 +0000 UTC m=+1516.428947330" lastFinishedPulling="2025-11-25 15:19:18.328284259 +0000 UTC m=+1517.075418689" observedRunningTime="2025-11-25 15:19:18.749922354 +0000 UTC m=+1517.497056804" watchObservedRunningTime="2025-11-25 15:19:18.75274014 +0000 UTC m=+1517.499874570"
Nov 25 15:19:19 crc kubenswrapper[4731]: I1125 15:19:19.742337 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wnb2m" podUID="89a2a194-d391-4572-aadc-a05524d2b134" containerName="registry-server" containerID="cri-o://7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59" gracePeriod=2
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.039858 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.039936 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.692134 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.753167 4731 generic.go:334] "Generic (PLEG): container finished" podID="89a2a194-d391-4572-aadc-a05524d2b134" containerID="7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59" exitCode=0
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.753229 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnb2m" event={"ID":"89a2a194-d391-4572-aadc-a05524d2b134","Type":"ContainerDied","Data":"7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59"}
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.753243 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wnb2m"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.753275 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wnb2m" event={"ID":"89a2a194-d391-4572-aadc-a05524d2b134","Type":"ContainerDied","Data":"ccec258a7bdda783b5902f7967cb81e7703e2a72cd97736d45ff0f6c59349433"}
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.753306 4731 scope.go:117] "RemoveContainer" containerID="7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.778901 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-utilities\") pod \"89a2a194-d391-4572-aadc-a05524d2b134\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") "
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.779068 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ndqr\" (UniqueName: \"kubernetes.io/projected/89a2a194-d391-4572-aadc-a05524d2b134-kube-api-access-4ndqr\") pod \"89a2a194-d391-4572-aadc-a05524d2b134\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") "
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.779111 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-catalog-content\") pod \"89a2a194-d391-4572-aadc-a05524d2b134\" (UID: \"89a2a194-d391-4572-aadc-a05524d2b134\") "
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.781032 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-utilities" (OuterVolumeSpecName: "utilities") pod "89a2a194-d391-4572-aadc-a05524d2b134" (UID: "89a2a194-d391-4572-aadc-a05524d2b134"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.783094 4731 scope.go:117] "RemoveContainer" containerID="39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.796658 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89a2a194-d391-4572-aadc-a05524d2b134-kube-api-access-4ndqr" (OuterVolumeSpecName: "kube-api-access-4ndqr") pod "89a2a194-d391-4572-aadc-a05524d2b134" (UID: "89a2a194-d391-4572-aadc-a05524d2b134"). InnerVolumeSpecName "kube-api-access-4ndqr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.835066 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89a2a194-d391-4572-aadc-a05524d2b134" (UID: "89a2a194-d391-4572-aadc-a05524d2b134"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.864594 4731 scope.go:117] "RemoveContainer" containerID="274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.881745 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.881782 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ndqr\" (UniqueName: \"kubernetes.io/projected/89a2a194-d391-4572-aadc-a05524d2b134-kube-api-access-4ndqr\") on node \"crc\" DevicePath \"\""
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.881793 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89a2a194-d391-4572-aadc-a05524d2b134-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.921485 4731 scope.go:117] "RemoveContainer" containerID="7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59"
Nov 25 15:19:20 crc kubenswrapper[4731]: E1125 15:19:20.921934 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59\": container with ID starting with 7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59 not found: ID does not exist" containerID="7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.921972 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59"} err="failed to get container status \"7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59\": rpc error: code = NotFound desc = could not find container \"7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59\": container with ID starting with 7b88808e432bfa6b40f7abc09c4aee8d0755d2cf661bbc5d19aa8e9040896b59 not found: ID does not exist"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.921993 4731 scope.go:117] "RemoveContainer" containerID="39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202"
Nov 25 15:19:20 crc kubenswrapper[4731]: E1125 15:19:20.922515 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202\": container with ID starting with 39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202 not found: ID does not exist" containerID="39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.922536 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202"} err="failed to get container status \"39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202\": rpc error: code = NotFound desc = could not find container \"39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202\": container with ID starting with 39fb5890067eaf27d31741ea747f210753d9df3c24c3d2f67177ff2da259c202 not found: ID does not exist"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.922552 4731 scope.go:117] "RemoveContainer" containerID="274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268"
Nov 25 15:19:20 crc kubenswrapper[4731]: E1125 15:19:20.923437 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268\": container with ID starting with 274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268 not found: ID does not exist" containerID="274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268"
Nov 25 15:19:20 crc kubenswrapper[4731]: I1125 15:19:20.923466 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268"} err="failed to get container status \"274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268\": rpc error: code = NotFound desc = could not find container \"274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268\": container with ID starting with 274a42f3c5c014c9d3a31a7a5c5fb9cd5625e93fd58f68b5f45ec3a4954de268 not found: ID does not exist"
Nov 25 15:19:21 crc kubenswrapper[4731]: I1125 15:19:21.095212 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wnb2m"]
Nov 25 15:19:21 crc kubenswrapper[4731]: I1125 15:19:21.107125 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wnb2m"]
Nov 25 15:19:21 crc kubenswrapper[4731]: I1125 15:19:21.773656 4731 generic.go:334] "Generic (PLEG): container finished" podID="4e21c7c5-3531-4470-b2d5-a57c748b4add" containerID="ad11a80659a2732ba078f83f1e64c238fe319fa3bcfb37be7bbf8646b39dc94c" exitCode=0
Nov 25 15:19:21 crc kubenswrapper[4731]: I1125 15:19:21.773805 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc" event={"ID":"4e21c7c5-3531-4470-b2d5-a57c748b4add","Type":"ContainerDied","Data":"ad11a80659a2732ba078f83f1e64c238fe319fa3bcfb37be7bbf8646b39dc94c"}
Nov 25 15:19:22 crc kubenswrapper[4731]: I1125 15:19:22.827279 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89a2a194-d391-4572-aadc-a05524d2b134" path="/var/lib/kubelet/pods/89a2a194-d391-4572-aadc-a05524d2b134/volumes"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.199599 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.226406 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-inventory\") pod \"4e21c7c5-3531-4470-b2d5-a57c748b4add\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") "
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.226561 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jtnq\" (UniqueName: \"kubernetes.io/projected/4e21c7c5-3531-4470-b2d5-a57c748b4add-kube-api-access-4jtnq\") pod \"4e21c7c5-3531-4470-b2d5-a57c748b4add\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") "
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.226599 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-ssh-key\") pod \"4e21c7c5-3531-4470-b2d5-a57c748b4add\" (UID: \"4e21c7c5-3531-4470-b2d5-a57c748b4add\") "
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.235464 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e21c7c5-3531-4470-b2d5-a57c748b4add-kube-api-access-4jtnq" (OuterVolumeSpecName: "kube-api-access-4jtnq") pod "4e21c7c5-3531-4470-b2d5-a57c748b4add" (UID: "4e21c7c5-3531-4470-b2d5-a57c748b4add"). InnerVolumeSpecName "kube-api-access-4jtnq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.266022 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4e21c7c5-3531-4470-b2d5-a57c748b4add" (UID: "4e21c7c5-3531-4470-b2d5-a57c748b4add"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.279756 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-inventory" (OuterVolumeSpecName: "inventory") pod "4e21c7c5-3531-4470-b2d5-a57c748b4add" (UID: "4e21c7c5-3531-4470-b2d5-a57c748b4add"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.329145 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.329197 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jtnq\" (UniqueName: \"kubernetes.io/projected/4e21c7c5-3531-4470-b2d5-a57c748b4add-kube-api-access-4jtnq\") on node \"crc\" DevicePath \"\""
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.329212 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4e21c7c5-3531-4470-b2d5-a57c748b4add-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.795525 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc" event={"ID":"4e21c7c5-3531-4470-b2d5-a57c748b4add","Type":"ContainerDied","Data":"8421b576d9b90138f002bd77b3e9c95de228aaa04644a74ad55c2c72519b0cd8"}
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.795882 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8421b576d9b90138f002bd77b3e9c95de228aaa04644a74ad55c2c72519b0cd8"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.795597 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-h9bbc"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.876390 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"]
Nov 25 15:19:23 crc kubenswrapper[4731]: E1125 15:19:23.876980 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a2a194-d391-4572-aadc-a05524d2b134" containerName="extract-utilities"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.876999 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a2a194-d391-4572-aadc-a05524d2b134" containerName="extract-utilities"
Nov 25 15:19:23 crc kubenswrapper[4731]: E1125 15:19:23.877020 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e21c7c5-3531-4470-b2d5-a57c748b4add" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.877028 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e21c7c5-3531-4470-b2d5-a57c748b4add" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:19:23 crc kubenswrapper[4731]: E1125 15:19:23.877058 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a2a194-d391-4572-aadc-a05524d2b134" containerName="extract-content"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.877068 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a2a194-d391-4572-aadc-a05524d2b134" containerName="extract-content"
Nov 25 15:19:23 crc kubenswrapper[4731]: E1125 15:19:23.877099 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89a2a194-d391-4572-aadc-a05524d2b134" containerName="registry-server"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.877107 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="89a2a194-d391-4572-aadc-a05524d2b134" containerName="registry-server"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.877339 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="89a2a194-d391-4572-aadc-a05524d2b134" containerName="registry-server"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.877371 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e21c7c5-3531-4470-b2d5-a57c748b4add" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.878363 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.880833 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.881127 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.885174 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.886457 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 15:19:23 crc kubenswrapper[4731]: I1125 15:19:23.893566 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"]
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.050747 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.051014 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.051081 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.051270 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4v8z\" (UniqueName: \"kubernetes.io/projected/141271e2-9df3-449f-9590-6d7c43c8563b-kube-api-access-p4v8z\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.152658 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.152710 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.152760 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4v8z\" (UniqueName: \"kubernetes.io/projected/141271e2-9df3-449f-9590-6d7c43c8563b-kube-api-access-p4v8z\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.153118 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.159143 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.159712 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.169095 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.177703 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4v8z\" (UniqueName: \"kubernetes.io/projected/141271e2-9df3-449f-9590-6d7c43c8563b-kube-api-access-p4v8z\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.199981 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"
Nov 25 15:19:24 crc kubenswrapper[4731]: I1125 15:19:24.803068 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf"]
Nov 25 15:19:25 crc kubenswrapper[4731]: I1125 15:19:25.817921 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf" event={"ID":"141271e2-9df3-449f-9590-6d7c43c8563b","Type":"ContainerStarted","Data":"458ddfe2f0d51a374f8ea6a1a3bcf7281d767a232e7e32fa67e814b9603f43f7"}
Nov 25 15:19:26 crc kubenswrapper[4731]: I1125 15:19:26.849022 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf" event={"ID":"141271e2-9df3-449f-9590-6d7c43c8563b","Type":"ContainerStarted","Data":"3e0a25e5b707b2c4867451589f735ddc1758b45fad13c38b36d352245f48bdda"}
Nov 25 15:19:26 crc kubenswrapper[4731]: I1125 15:19:26.876215 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf" podStartSLOduration=3.028234453 podStartE2EDuration="3.876193112s" podCreationTimestamp="2025-11-25 15:19:23 +0000 UTC" firstStartedPulling="2025-11-25 15:19:24.80333269 +0000 UTC m=+1523.550467120" lastFinishedPulling="2025-11-25 15:19:25.651291359 +0000 UTC m=+1524.398425779" observedRunningTime="2025-11-25 15:19:26.871195885 +0000 UTC m=+1525.618330315" watchObservedRunningTime="2025-11-25 15:19:26.876193112 +0000 UTC m=+1525.623327542"
Nov 25 15:19:50 crc kubenswrapper[4731]: I1125 15:19:50.040781 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:19:50 crc kubenswrapper[4731]: I1125 15:19:50.041868 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:20:13 crc kubenswrapper[4731]: I1125 15:20:13.050962 4731 scope.go:117] "RemoveContainer" containerID="4b82b717e6e11a23fd4ed5ede1cf6036a1585f3e11660179b59a0e77aceb8aac"
Nov 25 15:20:13 crc kubenswrapper[4731]: I1125 15:20:13.077733 4731 scope.go:117] "RemoveContainer" containerID="f50fff902b1f5357488d2fee4492fb5988c0da38a001ab51bf7da633a84af0d5"
Nov 25 15:20:13 crc kubenswrapper[4731]: I1125 15:20:13.118399 4731 scope.go:117] "RemoveContainer" containerID="351cec3bc637851ecfcb7e696eae85cb1280812e1ccb194b5f480b514ef89fd8"
Nov 25 15:20:20 crc kubenswrapper[4731]: I1125 15:20:20.040388 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:20:20 crc kubenswrapper[4731]: I1125 15:20:20.041032 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:20:20 crc kubenswrapper[4731]: I1125 15:20:20.041082 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz"
Nov 25 15:20:20 crc kubenswrapper[4731]: I1125 15:20:20.041945 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 15:20:20 crc kubenswrapper[4731]: I1125 15:20:20.042062 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" gracePeriod=600
Nov 25 15:20:20 crc kubenswrapper[4731]: I1125 15:20:20.393302 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" exitCode=0
Nov 25 15:20:20 crc kubenswrapper[4731]: I1125 15:20:20.393604 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8"}
Nov 25 15:20:20 crc kubenswrapper[4731]: I1125 15:20:20.393636 4731 scope.go:117] "RemoveContainer" containerID="bf03c5dc783d5b39c33c89b81293bca28d3310abcd7786c9974eab3a74d91a80"
Nov 25 15:20:20 crc kubenswrapper[4731]: E1125 15:20:20.877543 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:20:21 crc kubenswrapper[4731]: I1125 15:20:21.406849 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8"
Nov 25 15:20:21 crc kubenswrapper[4731]: E1125 15:20:21.407413 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:20:32 crc kubenswrapper[4731]: I1125 15:20:32.824710 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pvmmv"]
Nov 25 15:20:32 crc kubenswrapper[4731]: I1125 15:20:32.828369 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvmmv"
Nov 25 15:20:32 crc kubenswrapper[4731]: I1125 15:20:32.849401 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvmmv"]
Nov 25 15:20:32 crc kubenswrapper[4731]: I1125 15:20:32.941088 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb2mh\" (UniqueName: \"kubernetes.io/projected/93db4c1a-26df-492e-bf72-4fa686e1a46a-kube-api-access-vb2mh\") pod \"redhat-marketplace-pvmmv\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " pod="openshift-marketplace/redhat-marketplace-pvmmv"
Nov 25 15:20:32 crc kubenswrapper[4731]: I1125 15:20:32.941404 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-catalog-content\") pod \"redhat-marketplace-pvmmv\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " pod="openshift-marketplace/redhat-marketplace-pvmmv"
Nov 25 15:20:32 crc kubenswrapper[4731]: I1125 15:20:32.941455 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-utilities\") pod \"redhat-marketplace-pvmmv\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " pod="openshift-marketplace/redhat-marketplace-pvmmv"
Nov 25 15:20:33 crc kubenswrapper[4731]: I1125 15:20:33.044757 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-catalog-content\") pod \"redhat-marketplace-pvmmv\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " pod="openshift-marketplace/redhat-marketplace-pvmmv"
Nov 25 15:20:33 crc kubenswrapper[4731]: I1125 15:20:33.044835 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-utilities\") pod \"redhat-marketplace-pvmmv\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " pod="openshift-marketplace/redhat-marketplace-pvmmv"
Nov 25 15:20:33 crc kubenswrapper[4731]: I1125 15:20:33.044979 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb2mh\" (UniqueName: \"kubernetes.io/projected/93db4c1a-26df-492e-bf72-4fa686e1a46a-kube-api-access-vb2mh\") pod \"redhat-marketplace-pvmmv\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " pod="openshift-marketplace/redhat-marketplace-pvmmv"
Nov 25 15:20:33 crc kubenswrapper[4731]: I1125 15:20:33.045303 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-catalog-content\") pod \"redhat-marketplace-pvmmv\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " pod="openshift-marketplace/redhat-marketplace-pvmmv"
Nov 25 15:20:33 crc kubenswrapper[4731]: I1125 15:20:33.045349 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-utilities\") pod \"redhat-marketplace-pvmmv\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " pod="openshift-marketplace/redhat-marketplace-pvmmv"
Nov 25 15:20:33 crc kubenswrapper[4731]: I1125 15:20:33.066220 4731 operation_generator.go:637] "MountVolume.SetUp
succeeded for volume \"kube-api-access-vb2mh\" (UniqueName: \"kubernetes.io/projected/93db4c1a-26df-492e-bf72-4fa686e1a46a-kube-api-access-vb2mh\") pod \"redhat-marketplace-pvmmv\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " pod="openshift-marketplace/redhat-marketplace-pvmmv" Nov 25 15:20:33 crc kubenswrapper[4731]: I1125 15:20:33.149708 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvmmv" Nov 25 15:20:33 crc kubenswrapper[4731]: I1125 15:20:33.684192 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvmmv"] Nov 25 15:20:34 crc kubenswrapper[4731]: I1125 15:20:34.524769 4731 generic.go:334] "Generic (PLEG): container finished" podID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerID="0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425" exitCode=0 Nov 25 15:20:34 crc kubenswrapper[4731]: I1125 15:20:34.524857 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvmmv" event={"ID":"93db4c1a-26df-492e-bf72-4fa686e1a46a","Type":"ContainerDied","Data":"0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425"} Nov 25 15:20:34 crc kubenswrapper[4731]: I1125 15:20:34.525121 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvmmv" event={"ID":"93db4c1a-26df-492e-bf72-4fa686e1a46a","Type":"ContainerStarted","Data":"46c3b2646644945a82c435f947ca15694a37a62d333430d713d98c7992daf02c"} Nov 25 15:20:36 crc kubenswrapper[4731]: I1125 15:20:36.557168 4731 generic.go:334] "Generic (PLEG): container finished" podID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerID="040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1" exitCode=0 Nov 25 15:20:36 crc kubenswrapper[4731]: I1125 15:20:36.557660 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvmmv" event={"ID":"93db4c1a-26df-492e-bf72-4fa686e1a46a","Type":"ContainerDied","Data":"040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1"} Nov 25 15:20:36 crc kubenswrapper[4731]: I1125 15:20:36.812591 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:20:36 crc kubenswrapper[4731]: E1125 15:20:36.813413 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:20:37 crc kubenswrapper[4731]: I1125 15:20:37.571769 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvmmv" event={"ID":"93db4c1a-26df-492e-bf72-4fa686e1a46a","Type":"ContainerStarted","Data":"29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7"} Nov 25 15:20:37 crc kubenswrapper[4731]: I1125 15:20:37.591327 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pvmmv" podStartSLOduration=2.7908486249999997 podStartE2EDuration="5.591303129s" podCreationTimestamp="2025-11-25 15:20:32 +0000 UTC" firstStartedPulling="2025-11-25 15:20:34.526767424 +0000 UTC m=+1593.273901854" lastFinishedPulling="2025-11-25 
15:20:37.327221928 +0000 UTC m=+1596.074356358" observedRunningTime="2025-11-25 15:20:37.586932347 +0000 UTC m=+1596.334066777" watchObservedRunningTime="2025-11-25 15:20:37.591303129 +0000 UTC m=+1596.338437579" Nov 25 15:20:43 crc kubenswrapper[4731]: I1125 15:20:43.150110 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pvmmv" Nov 25 15:20:43 crc kubenswrapper[4731]: I1125 15:20:43.150725 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pvmmv" Nov 25 15:20:43 crc kubenswrapper[4731]: I1125 15:20:43.205059 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pvmmv" Nov 25 15:20:43 crc kubenswrapper[4731]: I1125 15:20:43.677726 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pvmmv" Nov 25 15:20:43 crc kubenswrapper[4731]: I1125 15:20:43.739902 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvmmv"] Nov 25 15:20:45 crc kubenswrapper[4731]: I1125 15:20:45.656635 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pvmmv" podUID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerName="registry-server" containerID="cri-o://29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7" gracePeriod=2 Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.220508 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvmmv" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.393981 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb2mh\" (UniqueName: \"kubernetes.io/projected/93db4c1a-26df-492e-bf72-4fa686e1a46a-kube-api-access-vb2mh\") pod \"93db4c1a-26df-492e-bf72-4fa686e1a46a\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.394157 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-catalog-content\") pod \"93db4c1a-26df-492e-bf72-4fa686e1a46a\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.394250 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-utilities\") pod \"93db4c1a-26df-492e-bf72-4fa686e1a46a\" (UID: \"93db4c1a-26df-492e-bf72-4fa686e1a46a\") " Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.395710 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-utilities" (OuterVolumeSpecName: "utilities") pod "93db4c1a-26df-492e-bf72-4fa686e1a46a" (UID: "93db4c1a-26df-492e-bf72-4fa686e1a46a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.399606 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93db4c1a-26df-492e-bf72-4fa686e1a46a-kube-api-access-vb2mh" (OuterVolumeSpecName: "kube-api-access-vb2mh") pod "93db4c1a-26df-492e-bf72-4fa686e1a46a" (UID: "93db4c1a-26df-492e-bf72-4fa686e1a46a"). InnerVolumeSpecName "kube-api-access-vb2mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.415605 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93db4c1a-26df-492e-bf72-4fa686e1a46a" (UID: "93db4c1a-26df-492e-bf72-4fa686e1a46a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.496248 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb2mh\" (UniqueName: \"kubernetes.io/projected/93db4c1a-26df-492e-bf72-4fa686e1a46a-kube-api-access-vb2mh\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.496283 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.496294 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93db4c1a-26df-492e-bf72-4fa686e1a46a-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.669874 4731 generic.go:334] "Generic (PLEG): container finished" podID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerID="29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7" exitCode=0 Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.669922 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvmmv" event={"ID":"93db4c1a-26df-492e-bf72-4fa686e1a46a","Type":"ContainerDied","Data":"29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7"} Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.669953 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pvmmv" event={"ID":"93db4c1a-26df-492e-bf72-4fa686e1a46a","Type":"ContainerDied","Data":"46c3b2646644945a82c435f947ca15694a37a62d333430d713d98c7992daf02c"} Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.669955 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pvmmv" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.669971 4731 scope.go:117] "RemoveContainer" containerID="29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.697160 4731 scope.go:117] "RemoveContainer" containerID="040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.705856 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvmmv"] Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.717850 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pvmmv"] Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.720565 4731 scope.go:117] "RemoveContainer" containerID="0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.778343 4731 scope.go:117] "RemoveContainer" containerID="29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7" Nov 25 15:20:46 crc kubenswrapper[4731]: E1125 15:20:46.778782 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7\": container with ID starting with 29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7 not found: ID does not exist" containerID="29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.778855 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7"} err="failed to get container status \"29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7\": rpc error: code = NotFound desc = could not find container \"29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7\": container with ID starting with 29af40aef49723bcf86a4e38d899b1ab9ee7918e7b7382d933af6a03b48d6ef7 not found: ID does not exist" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.778893 4731 scope.go:117] "RemoveContainer" containerID="040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1" Nov 25 15:20:46 crc kubenswrapper[4731]: E1125 15:20:46.779205 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1\": container with ID starting with 040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1 not found: ID does not exist" containerID="040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.779244 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1"} err="failed to get container status \"040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1\": rpc error: code = NotFound desc = could not find container \"040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1\": container with ID starting with 040a693ef624c59304e56713aa03f34294b3849b0fada4390ba44f986e2dc1d1 not found: ID does not exist" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.779267 4731 scope.go:117] "RemoveContainer" 
containerID="0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425" Nov 25 15:20:46 crc kubenswrapper[4731]: E1125 15:20:46.780024 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425\": container with ID starting with 0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425 not found: ID does not exist" containerID="0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.780078 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425"} err="failed to get container status \"0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425\": rpc error: code = NotFound desc = could not find container \"0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425\": container with ID starting with 0f8e4bcb2f28c45fb236b79e0f592a88ea22dcd0213339ea617e490e411cd425 not found: ID does not exist" Nov 25 15:20:46 crc kubenswrapper[4731]: I1125 15:20:46.826264 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93db4c1a-26df-492e-bf72-4fa686e1a46a" path="/var/lib/kubelet/pods/93db4c1a-26df-492e-bf72-4fa686e1a46a/volumes" Nov 25 15:20:50 crc kubenswrapper[4731]: I1125 15:20:50.811371 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:20:50 crc kubenswrapper[4731]: E1125 15:20:50.812071 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:21:03 crc kubenswrapper[4731]: I1125 15:21:03.811502 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:21:03 crc kubenswrapper[4731]: E1125 15:21:03.813014 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:21:13 crc kubenswrapper[4731]: I1125 15:21:13.325438 4731 scope.go:117] "RemoveContainer" containerID="bd77b646d49c55c07df0920b8e18ea4db9dc2420e8dd045d73e01f7ab50dba77" Nov 25 15:21:13 crc kubenswrapper[4731]: I1125 15:21:13.357745 4731 scope.go:117] "RemoveContainer" containerID="2ad0348fa9586e8a3b34d4f98ad9de624e5eaa5e4f2103acae3d71db5d429e9a" Nov 25 15:21:15 crc kubenswrapper[4731]: I1125 15:21:15.811741 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:21:15 crc kubenswrapper[4731]: E1125 15:21:15.812488 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:21:27 crc kubenswrapper[4731]: I1125 15:21:27.812576 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:21:27 crc kubenswrapper[4731]: E1125 15:21:27.813410 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:21:39 crc kubenswrapper[4731]: I1125 15:21:39.812308 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:21:39 crc kubenswrapper[4731]: E1125 15:21:39.813356 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:21:50 crc kubenswrapper[4731]: I1125 15:21:50.812562 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:21:50 crc kubenswrapper[4731]: E1125 15:21:50.813536 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:22:05 crc kubenswrapper[4731]: I1125 15:22:05.811893 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:22:05 crc kubenswrapper[4731]: E1125 15:22:05.812694 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:22:13 crc kubenswrapper[4731]: I1125 15:22:13.450410 4731 scope.go:117] "RemoveContainer" containerID="e577c683896ade21424b1ce765b15d8267020a883c15b34b7945df10ce935d25" Nov 25 15:22:13 crc kubenswrapper[4731]: I1125 15:22:13.476569 4731 scope.go:117] "RemoveContainer" containerID="eeaadc579214b29c11e2558e57d40119c8b69ab2e4fea672e7f22e19ae55e684" Nov 25 15:22:13 crc kubenswrapper[4731]: I1125 15:22:13.507087 4731 scope.go:117] "RemoveContainer" containerID="b9f6295c7039316f3ccd467038e06fb7ebf1b2759c764118e92ebb0fdf3fa919" Nov 25 15:22:13 crc kubenswrapper[4731]: I1125 15:22:13.536338 4731 scope.go:117] "RemoveContainer" 
containerID="828a8305fd86b68e9ff1b98db3f2c1276cf5946587e14a526d7f431cb0c4851e" Nov 25 15:22:20 crc kubenswrapper[4731]: I1125 15:22:20.812168 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:22:20 crc kubenswrapper[4731]: E1125 15:22:20.812795 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:22:31 crc kubenswrapper[4731]: I1125 15:22:31.811306 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:22:31 crc kubenswrapper[4731]: E1125 15:22:31.811938 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:22:38 crc kubenswrapper[4731]: I1125 15:22:38.922624 4731 generic.go:334] "Generic (PLEG): container finished" podID="141271e2-9df3-449f-9590-6d7c43c8563b" containerID="3e0a25e5b707b2c4867451589f735ddc1758b45fad13c38b36d352245f48bdda" exitCode=0 Nov 25 15:22:38 crc kubenswrapper[4731]: I1125 15:22:38.922797 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf" event={"ID":"141271e2-9df3-449f-9590-6d7c43c8563b","Type":"ContainerDied","Data":"3e0a25e5b707b2c4867451589f735ddc1758b45fad13c38b36d352245f48bdda"} Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.342068 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf" Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.407742 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-ssh-key\") pod \"141271e2-9df3-449f-9590-6d7c43c8563b\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.407834 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4v8z\" (UniqueName: \"kubernetes.io/projected/141271e2-9df3-449f-9590-6d7c43c8563b-kube-api-access-p4v8z\") pod \"141271e2-9df3-449f-9590-6d7c43c8563b\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.408050 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-inventory\") pod \"141271e2-9df3-449f-9590-6d7c43c8563b\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.408144 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-bootstrap-combined-ca-bundle\") pod \"141271e2-9df3-449f-9590-6d7c43c8563b\" (UID: \"141271e2-9df3-449f-9590-6d7c43c8563b\") " Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.413857 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141271e2-9df3-449f-9590-6d7c43c8563b-kube-api-access-p4v8z" (OuterVolumeSpecName: "kube-api-access-p4v8z") pod "141271e2-9df3-449f-9590-6d7c43c8563b" (UID: "141271e2-9df3-449f-9590-6d7c43c8563b"). InnerVolumeSpecName "kube-api-access-p4v8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.413963 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "141271e2-9df3-449f-9590-6d7c43c8563b" (UID: "141271e2-9df3-449f-9590-6d7c43c8563b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.440134 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-inventory" (OuterVolumeSpecName: "inventory") pod "141271e2-9df3-449f-9590-6d7c43c8563b" (UID: "141271e2-9df3-449f-9590-6d7c43c8563b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.444146 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "141271e2-9df3-449f-9590-6d7c43c8563b" (UID: "141271e2-9df3-449f-9590-6d7c43c8563b"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.510694 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.510739 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4v8z\" (UniqueName: \"kubernetes.io/projected/141271e2-9df3-449f-9590-6d7c43c8563b-kube-api-access-p4v8z\") on node \"crc\" DevicePath \"\"" Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.510755 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.510767 4731 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141271e2-9df3-449f-9590-6d7c43c8563b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.942311 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf" event={"ID":"141271e2-9df3-449f-9590-6d7c43c8563b","Type":"ContainerDied","Data":"458ddfe2f0d51a374f8ea6a1a3bcf7281d767a232e7e32fa67e814b9603f43f7"} Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.942361 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="458ddfe2f0d51a374f8ea6a1a3bcf7281d767a232e7e32fa67e814b9603f43f7" Nov 25 15:22:40 crc kubenswrapper[4731]: I1125 15:22:40.942384 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.029202 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm"] Nov 25 15:22:41 crc kubenswrapper[4731]: E1125 15:22:41.029618 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141271e2-9df3-449f-9590-6d7c43c8563b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.029636 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="141271e2-9df3-449f-9590-6d7c43c8563b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 15:22:41 crc kubenswrapper[4731]: E1125 15:22:41.029654 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerName="extract-utilities" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.029661 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerName="extract-utilities" Nov 25 15:22:41 crc kubenswrapper[4731]: E1125 15:22:41.029678 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerName="registry-server" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.029685 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerName="registry-server" Nov 25 15:22:41 crc kubenswrapper[4731]: E1125 15:22:41.029707 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerName="extract-content" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.029715 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerName="extract-content" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.029949 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="93db4c1a-26df-492e-bf72-4fa686e1a46a" containerName="registry-server" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.029963 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="141271e2-9df3-449f-9590-6d7c43c8563b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.030677 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.033951 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.034211 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.034385 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.034545 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.042306 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm"] Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.122096 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.122745 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.122799 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jvmg\" (UniqueName: \"kubernetes.io/projected/0e902f4e-eb88-45a6-8150-73c6bce43b9f-kube-api-access-8jvmg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.224945 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.225015 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jvmg\" (UniqueName: \"kubernetes.io/projected/0e902f4e-eb88-45a6-8150-73c6bce43b9f-kube-api-access-8jvmg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.225110 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.229423 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.230123 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.248145 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jvmg\" (UniqueName: \"kubernetes.io/projected/0e902f4e-eb88-45a6-8150-73c6bce43b9f-kube-api-access-8jvmg\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.349186 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.894424 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm"] Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.900216 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:22:41 crc kubenswrapper[4731]: I1125 15:22:41.951123 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" event={"ID":"0e902f4e-eb88-45a6-8150-73c6bce43b9f","Type":"ContainerStarted","Data":"8d1bb6131af3f7fe32bba98e8246689ef5c23275e4f88da41975ae61c732f6bb"} Nov 25 15:22:42 crc kubenswrapper[4731]: I1125 15:22:42.961198 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" event={"ID":"0e902f4e-eb88-45a6-8150-73c6bce43b9f","Type":"ContainerStarted","Data":"9399c829d58a017516934f6f718c1fabf4ccf9687c137e4c32c0c72bee5bc7af"} Nov 25 15:22:42 crc kubenswrapper[4731]: I1125 15:22:42.991844 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" podStartSLOduration=1.294207794 podStartE2EDuration="1.991807902s" podCreationTimestamp="2025-11-25 15:22:41 +0000 UTC" firstStartedPulling="2025-11-25 15:22:41.899909067 +0000 UTC m=+1720.647043507" lastFinishedPulling="2025-11-25 15:22:42.597509185 +0000 UTC m=+1721.344643615" observedRunningTime="2025-11-25 15:22:42.980841036 +0000 UTC m=+1721.727975466" watchObservedRunningTime="2025-11-25 15:22:42.991807902 +0000 UTC m=+1721.738942332" Nov 25 15:22:43 crc kubenswrapper[4731]: I1125 15:22:43.051369 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-n2zrw"] Nov 25 
15:22:43 crc kubenswrapper[4731]: I1125 15:22:43.060405 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-q8jfj"] Nov 25 15:22:43 crc kubenswrapper[4731]: I1125 15:22:43.072025 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-18b0-account-create-bnvc5"] Nov 25 15:22:43 crc kubenswrapper[4731]: I1125 15:22:43.089946 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7e51-account-create-2kjwl"] Nov 25 15:22:43 crc kubenswrapper[4731]: I1125 15:22:43.105923 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-n2zrw"] Nov 25 15:22:43 crc kubenswrapper[4731]: I1125 15:22:43.116846 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-18b0-account-create-bnvc5"] Nov 25 15:22:43 crc kubenswrapper[4731]: I1125 15:22:43.127114 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7e51-account-create-2kjwl"] Nov 25 15:22:43 crc kubenswrapper[4731]: I1125 15:22:43.137269 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-q8jfj"] Nov 25 15:22:44 crc kubenswrapper[4731]: I1125 15:22:44.823667 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4092bd55-8a71-4286-a6e3-eca18e3ced0b" path="/var/lib/kubelet/pods/4092bd55-8a71-4286-a6e3-eca18e3ced0b/volumes" Nov 25 15:22:44 crc kubenswrapper[4731]: I1125 15:22:44.824509 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42f1fb22-05ce-49c4-b0fb-fd0c98a81216" path="/var/lib/kubelet/pods/42f1fb22-05ce-49c4-b0fb-fd0c98a81216/volumes" Nov 25 15:22:44 crc kubenswrapper[4731]: I1125 15:22:44.825065 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be16145-50e5-4de7-99c1-ad67f473f16f" path="/var/lib/kubelet/pods/7be16145-50e5-4de7-99c1-ad67f473f16f/volumes" Nov 25 15:22:44 crc kubenswrapper[4731]: I1125 15:22:44.825595 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e05a140c-c361-4e2a-9d20-822743eeceba" path="/var/lib/kubelet/pods/e05a140c-c361-4e2a-9d20-822743eeceba/volumes" Nov 25 15:22:45 crc kubenswrapper[4731]: I1125 15:22:45.028600 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jfhp7"] Nov 25 15:22:45 crc kubenswrapper[4731]: I1125 15:22:45.038385 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9e93-account-create-59bwv"] Nov 25 15:22:45 crc kubenswrapper[4731]: I1125 15:22:45.047037 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jfhp7"] Nov 25 15:22:45 crc kubenswrapper[4731]: I1125 15:22:45.055795 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9e93-account-create-59bwv"] Nov 25 15:22:45 crc kubenswrapper[4731]: I1125 15:22:45.811713 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:22:45 crc kubenswrapper[4731]: E1125 15:22:45.812270 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:22:46 crc kubenswrapper[4731]: I1125 15:22:46.821857 4731 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a" path="/var/lib/kubelet/pods/0866bfc0-d1ef-4c65-bf79-71e9f9bbd07a/volumes" Nov 25 15:22:46 crc kubenswrapper[4731]: I1125 15:22:46.822419 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="369556c5-9264-481b-96e6-070055cab1ed" path="/var/lib/kubelet/pods/369556c5-9264-481b-96e6-070055cab1ed/volumes" Nov 25 15:23:00 crc kubenswrapper[4731]: I1125 15:23:00.812296 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:23:00 crc kubenswrapper[4731]: E1125 15:23:00.813098 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:23:12 crc kubenswrapper[4731]: I1125 15:23:12.818447 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:23:12 crc kubenswrapper[4731]: E1125 15:23:12.820132 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:23:13 crc kubenswrapper[4731]: I1125 15:23:13.589955 4731 scope.go:117] "RemoveContainer" containerID="99aff11bb58e4b1e1cbcbf3ae6ac7ae2472ac342fa26ed8dccc5e021b0448d36" Nov 25 15:23:13 crc kubenswrapper[4731]: I1125 15:23:13.618360 4731 scope.go:117] "RemoveContainer" containerID="f272995c2df8210d0df3e9972775067ab6307b04d5ca6511ce3a35771b29f150" Nov 25 15:23:13 crc kubenswrapper[4731]: I1125 15:23:13.673542 4731 scope.go:117] "RemoveContainer" containerID="c7d34b035a1fc1a946a3cd01625940867208e09b063ee8f7beff5580c148f214" Nov 25 15:23:13 crc kubenswrapper[4731]: I1125 15:23:13.697192 4731 scope.go:117] "RemoveContainer" containerID="6c3cb3b9773d85266db990098c6666aa14be4f987b39067202fc96f0b187e450" Nov 25 15:23:13 crc kubenswrapper[4731]: I1125 15:23:13.730650 4731 scope.go:117] "RemoveContainer" containerID="e2d41f7e2f2a0dd6bb0f3f0e9ff758834f6faab13ab36dc0e3829e16c92f10b8" Nov 25 15:23:13 crc kubenswrapper[4731]: I1125 15:23:13.747287 4731 scope.go:117] "RemoveContainer" containerID="ddeacfa616d8ace80553f5884cfc8d1f41fdf7be91cedaffbd498c83d100e72e" Nov 25 15:23:13 crc kubenswrapper[4731]: I1125 15:23:13.800125 4731 scope.go:117] "RemoveContainer" containerID="3764b418ade0428b5f39adf864a0c0fb1bd447c7baf88582ea2a1dc0d24e60d1" Nov 25 15:23:13 crc kubenswrapper[4731]: I1125 15:23:13.830758 4731 scope.go:117] "RemoveContainer" containerID="8bb4ca1679e23a7e4e3d45fddde41509b469cbf651a2b270c0de76ccf753d7d6" Nov 25 15:23:20 crc kubenswrapper[4731]: I1125 15:23:20.044601 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-v6khp"] Nov 25 15:23:20 crc kubenswrapper[4731]: I1125 15:23:20.053302 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-v6khp"] Nov 25 15:23:20 crc 
kubenswrapper[4731]: I1125 15:23:20.827913 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="057571f2-061b-4efa-8faf-3949fbae1889" path="/var/lib/kubelet/pods/057571f2-061b-4efa-8faf-3949fbae1889/volumes" Nov 25 15:23:21 crc kubenswrapper[4731]: I1125 15:23:21.033584 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-075e-account-create-lp2p6"] Nov 25 15:23:21 crc kubenswrapper[4731]: I1125 15:23:21.043474 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-s2pj9"] Nov 25 15:23:21 crc kubenswrapper[4731]: I1125 15:23:21.056863 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-7mzbq"] Nov 25 15:23:21 crc kubenswrapper[4731]: I1125 15:23:21.069078 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-075e-account-create-lp2p6"] Nov 25 15:23:21 crc kubenswrapper[4731]: I1125 15:23:21.079606 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-7mzbq"] Nov 25 15:23:21 crc kubenswrapper[4731]: I1125 15:23:21.091484 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-s2pj9"] Nov 25 15:23:21 crc kubenswrapper[4731]: I1125 15:23:21.101631 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-13ae-account-create-49z99"] Nov 25 15:23:21 crc kubenswrapper[4731]: I1125 15:23:21.110855 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-13ae-account-create-49z99"] Nov 25 15:23:22 crc kubenswrapper[4731]: I1125 15:23:22.822770 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b21bc434-c659-4153-ad06-7eb075fae022" path="/var/lib/kubelet/pods/b21bc434-c659-4153-ad06-7eb075fae022/volumes" Nov 25 15:23:22 crc kubenswrapper[4731]: I1125 15:23:22.823699 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19edf47-a4ad-4705-b1e7-b4316db0ac5f" path="/var/lib/kubelet/pods/f19edf47-a4ad-4705-b1e7-b4316db0ac5f/volumes" Nov 25 15:23:22 crc kubenswrapper[4731]: I1125 15:23:22.824303 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f30f9bba-77f8-4ec4-8a2c-0136424c403a" path="/var/lib/kubelet/pods/f30f9bba-77f8-4ec4-8a2c-0136424c403a/volumes" Nov 25 15:23:22 crc kubenswrapper[4731]: I1125 15:23:22.824897 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdeef619-da62-4c86-949e-1b795bdc5de3" path="/var/lib/kubelet/pods/fdeef619-da62-4c86-949e-1b795bdc5de3/volumes" Nov 25 15:23:23 crc kubenswrapper[4731]: I1125 15:23:23.813642 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:23:23 crc kubenswrapper[4731]: E1125 15:23:23.813922 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.044790 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-4917-account-create-kgths"] Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.056507 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-c8qvw"] Nov 25 15:23:28 crc 
kubenswrapper[4731]: I1125 15:23:28.064457 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jmh97"] Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.076771 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-4a1e-account-create-znsxk"] Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.090689 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-c8qvw"] Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.102216 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-4917-account-create-kgths"] Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.111117 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jmh97"] Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.119548 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-4a1e-account-create-znsxk"] Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.824022 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e04865-a9b1-495f-b0c3-0042b35c33c2" path="/var/lib/kubelet/pods/26e04865-a9b1-495f-b0c3-0042b35c33c2/volumes" Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.825271 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8402be71-b88d-4091-a1ca-f7ceb7f8ba97" path="/var/lib/kubelet/pods/8402be71-b88d-4091-a1ca-f7ceb7f8ba97/volumes" Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.826125 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f41eaec-9d8e-4a4a-bed0-2a225ace9e75" path="/var/lib/kubelet/pods/8f41eaec-9d8e-4a4a-bed0-2a225ace9e75/volumes" Nov 25 15:23:28 crc kubenswrapper[4731]: I1125 15:23:28.826667 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a791aaf4-bccc-458a-9072-a5f3e44a6ea6" path="/var/lib/kubelet/pods/a791aaf4-bccc-458a-9072-a5f3e44a6ea6/volumes" Nov 25 15:23:34 crc kubenswrapper[4731]: I1125 15:23:34.048617 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-trbrn"] Nov 25 15:23:34 crc kubenswrapper[4731]: I1125 15:23:34.059040 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-trbrn"] Nov 25 15:23:34 crc kubenswrapper[4731]: I1125 15:23:34.832324 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c59b5888-0ce3-4fa0-8e74-6ea06ff91390" path="/var/lib/kubelet/pods/c59b5888-0ce3-4fa0-8e74-6ea06ff91390/volumes" Nov 25 15:23:35 crc kubenswrapper[4731]: I1125 15:23:35.811278 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:23:35 crc kubenswrapper[4731]: E1125 15:23:35.811678 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:23:49 crc kubenswrapper[4731]: I1125 15:23:49.811260 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:23:49 crc kubenswrapper[4731]: E1125 15:23:49.812208 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:24:01 crc kubenswrapper[4731]: I1125 15:24:01.811715 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:24:01 crc kubenswrapper[4731]: E1125 15:24:01.813147 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:24:13 crc kubenswrapper[4731]: I1125 15:24:13.811852 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:24:13 crc kubenswrapper[4731]: E1125 15:24:13.812586 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:24:14 crc kubenswrapper[4731]: I1125 15:24:14.053366 4731 scope.go:117] "RemoveContainer" containerID="667b6e739a40b5475d4d29d25760a17a0c925f01422df31b55f21dff8d11f1bf" Nov 25 15:24:14 crc kubenswrapper[4731]: I1125 15:24:14.088654 4731 scope.go:117] "RemoveContainer" containerID="21dff3116e065585b7d94d9a6e85269ff40452a7e8651234cc72dcffe2f88453" Nov 25 15:24:14 crc kubenswrapper[4731]: I1125 15:24:14.140476 4731 scope.go:117] "RemoveContainer" containerID="7318225c192530846011681644b8be91cf945538e6bcbcad68cb94661455b779" Nov 25 15:24:14 crc kubenswrapper[4731]: I1125 15:24:14.190117 4731 scope.go:117] "RemoveContainer" containerID="be7483707aa511ff55b4bf374a66218106b1ad995cc1ddad1f14f40b4bf63632" Nov 25 15:24:14 crc kubenswrapper[4731]: I1125 15:24:14.242837 4731 scope.go:117] "RemoveContainer" containerID="6444728cfd18234fbcce00ceadcac4fff9a97f1e22b72cde4ebcdb152a3578c7" Nov 25 15:24:14 crc kubenswrapper[4731]: I1125 15:24:14.291900 4731 scope.go:117] "RemoveContainer" containerID="8492cc2ca277d0639fd265588a1f21d5a50a15adca8def590798fc3ca8cde89b" Nov 25 15:24:14 crc kubenswrapper[4731]: I1125 15:24:14.352743 4731 scope.go:117] "RemoveContainer" containerID="c71c9409472172c011a8303a9ee45b954237837a31dbc5a812378a1b1008eb74" Nov 25 15:24:14 crc kubenswrapper[4731]: I1125 15:24:14.387671 4731 scope.go:117] "RemoveContainer" containerID="0359022f3196b281024530f3ed45d5e1e80df84f485f1ee6a9d222a6c2eb1b1a" Nov 25 15:24:14 crc kubenswrapper[4731]: I1125 15:24:14.408121 4731 scope.go:117] "RemoveContainer" containerID="7138424c8158608f0ad55d8af270a01abfb9157a812ea9a061509ea8994f9d32" Nov 25 15:24:14 crc kubenswrapper[4731]: I1125 15:24:14.434627 4731 scope.go:117] "RemoveContainer" containerID="7e4ec7297ba9d0806edbfc9a236510f8dec15b38d5ca5bd26e79192d0fe10a93" Nov 25 15:24:16 crc kubenswrapper[4731]: I1125 15:24:16.044066 4731 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fcglk"] Nov 25 15:24:16 crc kubenswrapper[4731]: I1125 15:24:16.052716 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fcglk"] Nov 25 15:24:16 crc kubenswrapper[4731]: I1125 15:24:16.822101 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a9b890-4390-4543-ad98-6e8fd9c0d627" path="/var/lib/kubelet/pods/b0a9b890-4390-4543-ad98-6e8fd9c0d627/volumes" Nov 25 15:24:17 crc kubenswrapper[4731]: I1125 15:24:17.032095 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-n6xcb"] Nov 25 15:24:17 crc kubenswrapper[4731]: I1125 15:24:17.041195 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-vs6g5"] Nov 25 15:24:17 crc kubenswrapper[4731]: I1125 15:24:17.050926 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-n6xcb"] Nov 25 15:24:17 crc kubenswrapper[4731]: I1125 15:24:17.061066 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-vs6g5"] Nov 25 15:24:18 crc kubenswrapper[4731]: I1125 15:24:18.815681 4731 generic.go:334] "Generic (PLEG): container finished" podID="0e902f4e-eb88-45a6-8150-73c6bce43b9f" containerID="9399c829d58a017516934f6f718c1fabf4ccf9687c137e4c32c0c72bee5bc7af" exitCode=0 Nov 25 15:24:18 crc kubenswrapper[4731]: I1125 15:24:18.834557 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c14d90-d1d0-43ce-b4a2-3b36dad72a8e" path="/var/lib/kubelet/pods/06c14d90-d1d0-43ce-b4a2-3b36dad72a8e/volumes" Nov 25 15:24:18 crc kubenswrapper[4731]: I1125 15:24:18.835146 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8836cab0-6606-45e6-adb7-2797a9e7d022" path="/var/lib/kubelet/pods/8836cab0-6606-45e6-adb7-2797a9e7d022/volumes" Nov 25 15:24:18 crc kubenswrapper[4731]: I1125 15:24:18.835630 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" event={"ID":"0e902f4e-eb88-45a6-8150-73c6bce43b9f","Type":"ContainerDied","Data":"9399c829d58a017516934f6f718c1fabf4ccf9687c137e4c32c0c72bee5bc7af"} Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.285770 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.440851 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-ssh-key\") pod \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.440921 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-inventory\") pod \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.440947 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jvmg\" (UniqueName: \"kubernetes.io/projected/0e902f4e-eb88-45a6-8150-73c6bce43b9f-kube-api-access-8jvmg\") pod \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\" (UID: \"0e902f4e-eb88-45a6-8150-73c6bce43b9f\") " Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.455044 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e902f4e-eb88-45a6-8150-73c6bce43b9f-kube-api-access-8jvmg" (OuterVolumeSpecName: "kube-api-access-8jvmg") pod "0e902f4e-eb88-45a6-8150-73c6bce43b9f" (UID: "0e902f4e-eb88-45a6-8150-73c6bce43b9f"). InnerVolumeSpecName "kube-api-access-8jvmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.470086 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0e902f4e-eb88-45a6-8150-73c6bce43b9f" (UID: "0e902f4e-eb88-45a6-8150-73c6bce43b9f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.473961 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-inventory" (OuterVolumeSpecName: "inventory") pod "0e902f4e-eb88-45a6-8150-73c6bce43b9f" (UID: "0e902f4e-eb88-45a6-8150-73c6bce43b9f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.543512 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.543600 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0e902f4e-eb88-45a6-8150-73c6bce43b9f-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.543627 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jvmg\" (UniqueName: \"kubernetes.io/projected/0e902f4e-eb88-45a6-8150-73c6bce43b9f-kube-api-access-8jvmg\") on node \"crc\" DevicePath \"\"" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.834065 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" event={"ID":"0e902f4e-eb88-45a6-8150-73c6bce43b9f","Type":"ContainerDied","Data":"8d1bb6131af3f7fe32bba98e8246689ef5c23275e4f88da41975ae61c732f6bb"} Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.834731 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d1bb6131af3f7fe32bba98e8246689ef5c23275e4f88da41975ae61c732f6bb" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.834154 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.940972 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5"] Nov 25 15:24:20 crc kubenswrapper[4731]: E1125 15:24:20.941562 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e902f4e-eb88-45a6-8150-73c6bce43b9f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.941660 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e902f4e-eb88-45a6-8150-73c6bce43b9f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.941926 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e902f4e-eb88-45a6-8150-73c6bce43b9f" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.942570 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.944776 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.945278 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.945545 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.945681 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:24:20 crc kubenswrapper[4731]: I1125 15:24:20.954598 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5"] Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.053890 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhqdp\" (UniqueName: \"kubernetes.io/projected/75b68a16-e019-49f8-89b7-daa43f7c1fa0-kube-api-access-dhqdp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.053987 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.054287 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.155999 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.156104 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhqdp\" (UniqueName: \"kubernetes.io/projected/75b68a16-e019-49f8-89b7-daa43f7c1fa0-kube-api-access-dhqdp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.156176 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-ssh-key\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.159458 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.166379 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.184619 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhqdp\" (UniqueName: \"kubernetes.io/projected/75b68a16-e019-49f8-89b7-daa43f7c1fa0-kube-api-access-dhqdp\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.262490 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.797499 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5"] Nov 25 15:24:21 crc kubenswrapper[4731]: I1125 15:24:21.848034 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" event={"ID":"75b68a16-e019-49f8-89b7-daa43f7c1fa0","Type":"ContainerStarted","Data":"7541f647753b79f583284f71b421e3bdd231e54a0e6095b1015248aa22f382c9"} Nov 25 15:24:23 crc kubenswrapper[4731]: I1125 15:24:23.868756 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" event={"ID":"75b68a16-e019-49f8-89b7-daa43f7c1fa0","Type":"ContainerStarted","Data":"940148aaa628b4fcc7e024a5fd5e8ea652237aae6c966f3ba15364a70b7d8c77"} Nov 25 15:24:23 crc kubenswrapper[4731]: I1125 15:24:23.903806 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" podStartSLOduration=2.680362646 podStartE2EDuration="3.903784027s" podCreationTimestamp="2025-11-25 15:24:20 +0000 UTC" firstStartedPulling="2025-11-25 15:24:21.801927186 +0000 UTC m=+1820.549061616" lastFinishedPulling="2025-11-25 15:24:23.025348567 +0000 UTC m=+1821.772482997" observedRunningTime="2025-11-25 15:24:23.885515236 +0000 UTC m=+1822.632649676" watchObservedRunningTime="2025-11-25 15:24:23.903784027 +0000 UTC m=+1822.650918457" Nov 25 15:24:25 crc kubenswrapper[4731]: I1125 15:24:25.811628 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:24:25 crc kubenswrapper[4731]: E1125 15:24:25.812155 4731 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:24:30 crc kubenswrapper[4731]: I1125 15:24:30.036842 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-kplpn"] Nov 25 15:24:30 crc kubenswrapper[4731]: I1125 15:24:30.047501 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-kplpn"] Nov 25 15:24:30 crc kubenswrapper[4731]: I1125 15:24:30.823392 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5547f3b-298a-4906-8c60-1f51c87c1059" path="/var/lib/kubelet/pods/b5547f3b-298a-4906-8c60-1f51c87c1059/volumes" Nov 25 15:24:31 crc kubenswrapper[4731]: I1125 15:24:31.030742 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-c55qz"] Nov 25 15:24:31 crc kubenswrapper[4731]: I1125 15:24:31.040534 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-c55qz"] Nov 25 15:24:32 crc kubenswrapper[4731]: I1125 15:24:32.824701 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf94950-c565-46c7-95b1-9a62e2915979" path="/var/lib/kubelet/pods/4bf94950-c565-46c7-95b1-9a62e2915979/volumes" Nov 25 15:24:36 crc kubenswrapper[4731]: I1125 15:24:36.045764 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-dv2l4"] Nov 25 15:24:36 crc kubenswrapper[4731]: I1125 15:24:36.055276 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-dv2l4"] Nov 25 15:24:36 crc kubenswrapper[4731]: I1125 15:24:36.828092 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d828d0bf-a1e4-45a4-a840-4ddd1f84ed79" path="/var/lib/kubelet/pods/d828d0bf-a1e4-45a4-a840-4ddd1f84ed79/volumes" Nov 25 15:24:39 crc kubenswrapper[4731]: I1125 15:24:39.811868 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:24:39 crc kubenswrapper[4731]: E1125 15:24:39.812494 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:24:54 crc kubenswrapper[4731]: I1125 15:24:54.812486 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:24:54 crc kubenswrapper[4731]: E1125 15:24:54.815261 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:25:08 crc kubenswrapper[4731]: I1125 15:25:08.812878 4731 scope.go:117] "RemoveContainer" 
containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:25:08 crc kubenswrapper[4731]: E1125 15:25:08.813719 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:25:11 crc kubenswrapper[4731]: I1125 15:25:11.050547 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vnjcq"] Nov 25 15:25:11 crc kubenswrapper[4731]: I1125 15:25:11.060348 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-8101-account-create-v6vwr"] Nov 25 15:25:11 crc kubenswrapper[4731]: I1125 15:25:11.070781 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-9wqvh"] Nov 25 15:25:11 crc kubenswrapper[4731]: I1125 15:25:11.079334 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3d9a-account-create-584gn"] Nov 25 15:25:11 crc kubenswrapper[4731]: I1125 15:25:11.088738 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-50d5-account-create-2svk8"] Nov 25 15:25:11 crc kubenswrapper[4731]: I1125 15:25:11.097347 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-9wqvh"] Nov 25 15:25:11 crc kubenswrapper[4731]: I1125 15:25:11.109614 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-8101-account-create-v6vwr"] Nov 25 15:25:11 crc kubenswrapper[4731]: I1125 15:25:11.119804 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vnjcq"] Nov 25 15:25:11 crc kubenswrapper[4731]: I1125 15:25:11.132046 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3d9a-account-create-584gn"] Nov 25 15:25:11 crc kubenswrapper[4731]: I1125 15:25:11.142455 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-50d5-account-create-2svk8"] Nov 25 15:25:12 crc kubenswrapper[4731]: I1125 15:25:12.032125 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-pmklk"] Nov 25 15:25:12 crc kubenswrapper[4731]: I1125 15:25:12.041867 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-pmklk"] Nov 25 15:25:12 crc kubenswrapper[4731]: I1125 15:25:12.826103 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b0994d-ff1b-47bf-8728-965ce2899e08" path="/var/lib/kubelet/pods/01b0994d-ff1b-47bf-8728-965ce2899e08/volumes" Nov 25 15:25:12 crc kubenswrapper[4731]: I1125 15:25:12.828012 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2c1f53-61bb-4595-a5c9-17be8b0868a0" path="/var/lib/kubelet/pods/4c2c1f53-61bb-4595-a5c9-17be8b0868a0/volumes" Nov 25 15:25:12 crc kubenswrapper[4731]: I1125 15:25:12.828769 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="797ff534-6f19-4979-afc8-dfd592e24603" path="/var/lib/kubelet/pods/797ff534-6f19-4979-afc8-dfd592e24603/volumes" Nov 25 15:25:12 crc kubenswrapper[4731]: I1125 15:25:12.829358 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b33ccf64-af15-436e-ac8b-9c9efc83f08f" 
path="/var/lib/kubelet/pods/b33ccf64-af15-436e-ac8b-9c9efc83f08f/volumes" Nov 25 15:25:12 crc kubenswrapper[4731]: I1125 15:25:12.829931 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b775affe-04bb-418b-aae9-9a83db82995a" path="/var/lib/kubelet/pods/b775affe-04bb-418b-aae9-9a83db82995a/volumes" Nov 25 15:25:12 crc kubenswrapper[4731]: I1125 15:25:12.831110 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4998d67-3897-41fa-8a9d-f7e37c102a61" path="/var/lib/kubelet/pods/d4998d67-3897-41fa-8a9d-f7e37c102a61/volumes" Nov 25 15:25:14 crc kubenswrapper[4731]: I1125 15:25:14.658783 4731 scope.go:117] "RemoveContainer" containerID="7e4aadd6127f2215a50ad8ce519d0d9de9adad382ad612c5cc67fb349208655d" Nov 25 15:25:14 crc kubenswrapper[4731]: I1125 15:25:14.710383 4731 scope.go:117] "RemoveContainer" containerID="384eb658513fd92b8782dbb00c5dfa23521521b7546773d77d9c8c3da88c497b" Nov 25 15:25:14 crc kubenswrapper[4731]: I1125 15:25:14.740584 4731 scope.go:117] "RemoveContainer" containerID="32e56b99379bffd1ca4c6d2daf598c435fb0e4739aacdef3d7f1868f24cbd2c0" Nov 25 15:25:14 crc kubenswrapper[4731]: I1125 15:25:14.821709 4731 scope.go:117] "RemoveContainer" containerID="aa7f46f602fa1d54626fbdb510cd866e7a0e9485550128840459249493c5e6ec" Nov 25 15:25:14 crc kubenswrapper[4731]: I1125 15:25:14.846385 4731 scope.go:117] "RemoveContainer" containerID="8164a5f4fe6eccb52e84c5e0589667efd541b0f6efc7c6d1b52df8790ab1f3aa" Nov 25 15:25:14 crc kubenswrapper[4731]: I1125 15:25:14.910699 4731 scope.go:117] "RemoveContainer" containerID="d9d3f7c1519d1f94e9406193a59fad784785791a8422e29b1b93cb85ffdc8dec" Nov 25 15:25:14 crc kubenswrapper[4731]: I1125 15:25:14.955147 4731 scope.go:117] "RemoveContainer" containerID="c95c37daa5f3d56238e13045098d3e9c791a99e322def87bbeb255fa6683322d" Nov 25 15:25:14 crc kubenswrapper[4731]: I1125 15:25:14.980543 4731 scope.go:117] "RemoveContainer" containerID="151dfab801c3d57bf692d82a7ca8bb547a4d0ffc8a5ba915c151846e679957bc" Nov 25 15:25:15 crc kubenswrapper[4731]: I1125 15:25:15.004494 4731 scope.go:117] "RemoveContainer" containerID="abee0b8c4825cfdcf9430aa40abb5d70a1777babd7e9536d2175ac8191bc10a4" Nov 25 15:25:15 crc kubenswrapper[4731]: I1125 15:25:15.051315 4731 scope.go:117] "RemoveContainer" containerID="e971e1d970148de4c04f359b1dcd11e5d8557b952b092fec482781081388f214" Nov 25 15:25:15 crc kubenswrapper[4731]: I1125 15:25:15.094977 4731 scope.go:117] "RemoveContainer" containerID="4f40de864f3fb09f6a4a92392fb48c7780105855c10b7061497df7791acd5e20" Nov 25 15:25:15 crc kubenswrapper[4731]: I1125 15:25:15.121917 4731 scope.go:117] "RemoveContainer" containerID="9846acd7dc9890503766b946ca9c7f6b6bb6b6911fbd69f390fb77dfc0adbd35" Nov 25 15:25:22 crc kubenswrapper[4731]: I1125 15:25:22.820662 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:25:23 crc kubenswrapper[4731]: I1125 15:25:23.444290 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"b1fa04dd595babcd00ffd658c2b29e3b04674535b213d4a578d43ab2b176f067"} Nov 25 15:25:39 crc kubenswrapper[4731]: I1125 15:25:39.584652 4731 generic.go:334] "Generic (PLEG): container finished" podID="75b68a16-e019-49f8-89b7-daa43f7c1fa0" containerID="940148aaa628b4fcc7e024a5fd5e8ea652237aae6c966f3ba15364a70b7d8c77" exitCode=0 Nov 25 15:25:39 crc kubenswrapper[4731]: I1125 
15:25:39.584752 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" event={"ID":"75b68a16-e019-49f8-89b7-daa43f7c1fa0","Type":"ContainerDied","Data":"940148aaa628b4fcc7e024a5fd5e8ea652237aae6c966f3ba15364a70b7d8c77"} Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.006517 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.112362 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhqdp\" (UniqueName: \"kubernetes.io/projected/75b68a16-e019-49f8-89b7-daa43f7c1fa0-kube-api-access-dhqdp\") pod \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.112687 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-inventory\") pod \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.112908 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-ssh-key\") pod \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\" (UID: \"75b68a16-e019-49f8-89b7-daa43f7c1fa0\") " Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.120314 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75b68a16-e019-49f8-89b7-daa43f7c1fa0-kube-api-access-dhqdp" (OuterVolumeSpecName: "kube-api-access-dhqdp") pod "75b68a16-e019-49f8-89b7-daa43f7c1fa0" (UID: "75b68a16-e019-49f8-89b7-daa43f7c1fa0"). InnerVolumeSpecName "kube-api-access-dhqdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.145956 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "75b68a16-e019-49f8-89b7-daa43f7c1fa0" (UID: "75b68a16-e019-49f8-89b7-daa43f7c1fa0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.151700 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-inventory" (OuterVolumeSpecName: "inventory") pod "75b68a16-e019-49f8-89b7-daa43f7c1fa0" (UID: "75b68a16-e019-49f8-89b7-daa43f7c1fa0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.218181 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhqdp\" (UniqueName: \"kubernetes.io/projected/75b68a16-e019-49f8-89b7-daa43f7c1fa0-kube-api-access-dhqdp\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.218402 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.218465 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/75b68a16-e019-49f8-89b7-daa43f7c1fa0-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.604434 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" event={"ID":"75b68a16-e019-49f8-89b7-daa43f7c1fa0","Type":"ContainerDied","Data":"7541f647753b79f583284f71b421e3bdd231e54a0e6095b1015248aa22f382c9"} Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.604787 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7541f647753b79f583284f71b421e3bdd231e54a0e6095b1015248aa22f382c9" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.604497 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.681797 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp"] Nov 25 15:25:41 crc kubenswrapper[4731]: E1125 15:25:41.682324 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75b68a16-e019-49f8-89b7-daa43f7c1fa0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.682347 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="75b68a16-e019-49f8-89b7-daa43f7c1fa0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.682579 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="75b68a16-e019-49f8-89b7-daa43f7c1fa0" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.683399 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.686598 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.686867 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.687027 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.689091 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.691827 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp"] Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.829303 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.829848 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7vfn\" (UniqueName: \"kubernetes.io/projected/15424794-1bec-4dde-9c0c-7828167a0bd7-kube-api-access-r7vfn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.830553 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.933076 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.933179 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7vfn\" (UniqueName: \"kubernetes.io/projected/15424794-1bec-4dde-9c0c-7828167a0bd7-kube-api-access-r7vfn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.933321 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-inventory\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.937123 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.937463 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.954201 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7vfn\" (UniqueName: \"kubernetes.io/projected/15424794-1bec-4dde-9c0c-7828167a0bd7-kube-api-access-r7vfn\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:41 crc kubenswrapper[4731]: I1125 15:25:41.998383 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:42 crc kubenswrapper[4731]: W1125 15:25:42.580269 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15424794_1bec_4dde_9c0c_7828167a0bd7.slice/crio-02582dda259d0d53ec525c021b64f420a8ce71326275090ddab465a005985303 WatchSource:0}: Error finding container 02582dda259d0d53ec525c021b64f420a8ce71326275090ddab465a005985303: Status 404 returned error can't find the container with id 02582dda259d0d53ec525c021b64f420a8ce71326275090ddab465a005985303 Nov 25 15:25:42 crc kubenswrapper[4731]: I1125 15:25:42.582489 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp"] Nov 25 15:25:42 crc kubenswrapper[4731]: I1125 15:25:42.621397 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" event={"ID":"15424794-1bec-4dde-9c0c-7828167a0bd7","Type":"ContainerStarted","Data":"02582dda259d0d53ec525c021b64f420a8ce71326275090ddab465a005985303"} Nov 25 15:25:45 crc kubenswrapper[4731]: I1125 15:25:45.647382 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" event={"ID":"15424794-1bec-4dde-9c0c-7828167a0bd7","Type":"ContainerStarted","Data":"55b7d077601e2304c0d692c17c6eb1fb344d566fe216e9494af3e26d2e970dde"} Nov 25 15:25:45 crc kubenswrapper[4731]: I1125 15:25:45.667552 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" podStartSLOduration=2.325900982 podStartE2EDuration="4.667534678s" podCreationTimestamp="2025-11-25 15:25:41 +0000 UTC" firstStartedPulling="2025-11-25 15:25:42.58378932 +0000 UTC m=+1901.330923750" 
lastFinishedPulling="2025-11-25 15:25:44.925423016 +0000 UTC m=+1903.672557446" observedRunningTime="2025-11-25 15:25:45.662254868 +0000 UTC m=+1904.409389298" watchObservedRunningTime="2025-11-25 15:25:45.667534678 +0000 UTC m=+1904.414669108" Nov 25 15:25:51 crc kubenswrapper[4731]: I1125 15:25:51.698204 4731 generic.go:334] "Generic (PLEG): container finished" podID="15424794-1bec-4dde-9c0c-7828167a0bd7" containerID="55b7d077601e2304c0d692c17c6eb1fb344d566fe216e9494af3e26d2e970dde" exitCode=0 Nov 25 15:25:51 crc kubenswrapper[4731]: I1125 15:25:51.698318 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" event={"ID":"15424794-1bec-4dde-9c0c-7828167a0bd7","Type":"ContainerDied","Data":"55b7d077601e2304c0d692c17c6eb1fb344d566fe216e9494af3e26d2e970dde"} Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.089100 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.246990 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-inventory\") pod \"15424794-1bec-4dde-9c0c-7828167a0bd7\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.247091 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-ssh-key\") pod \"15424794-1bec-4dde-9c0c-7828167a0bd7\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.247235 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7vfn\" (UniqueName: \"kubernetes.io/projected/15424794-1bec-4dde-9c0c-7828167a0bd7-kube-api-access-r7vfn\") pod \"15424794-1bec-4dde-9c0c-7828167a0bd7\" (UID: \"15424794-1bec-4dde-9c0c-7828167a0bd7\") " Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.255200 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15424794-1bec-4dde-9c0c-7828167a0bd7-kube-api-access-r7vfn" (OuterVolumeSpecName: "kube-api-access-r7vfn") pod "15424794-1bec-4dde-9c0c-7828167a0bd7" (UID: "15424794-1bec-4dde-9c0c-7828167a0bd7"). InnerVolumeSpecName "kube-api-access-r7vfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.276882 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "15424794-1bec-4dde-9c0c-7828167a0bd7" (UID: "15424794-1bec-4dde-9c0c-7828167a0bd7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.280616 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-inventory" (OuterVolumeSpecName: "inventory") pod "15424794-1bec-4dde-9c0c-7828167a0bd7" (UID: "15424794-1bec-4dde-9c0c-7828167a0bd7"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.349751 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7vfn\" (UniqueName: \"kubernetes.io/projected/15424794-1bec-4dde-9c0c-7828167a0bd7-kube-api-access-r7vfn\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.349791 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.349801 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/15424794-1bec-4dde-9c0c-7828167a0bd7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.722945 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" event={"ID":"15424794-1bec-4dde-9c0c-7828167a0bd7","Type":"ContainerDied","Data":"02582dda259d0d53ec525c021b64f420a8ce71326275090ddab465a005985303"} Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.723004 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02582dda259d0d53ec525c021b64f420a8ce71326275090ddab465a005985303" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.723107 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.792653 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57"] Nov 25 15:25:53 crc kubenswrapper[4731]: E1125 15:25:53.793069 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15424794-1bec-4dde-9c0c-7828167a0bd7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.793087 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="15424794-1bec-4dde-9c0c-7828167a0bd7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.793262 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="15424794-1bec-4dde-9c0c-7828167a0bd7" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.793852 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.795665 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.796109 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.796613 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.796864 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.807043 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57"] Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.962240 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4lkj\" (UniqueName: \"kubernetes.io/projected/026d447e-1194-4316-b484-6525a214f761-kube-api-access-d4lkj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lls57\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.962338 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lls57\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:53 crc kubenswrapper[4731]: I1125 15:25:53.962407 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lls57\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:54 crc kubenswrapper[4731]: I1125 15:25:54.064991 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lls57\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:54 crc kubenswrapper[4731]: I1125 15:25:54.065218 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4lkj\" (UniqueName: \"kubernetes.io/projected/026d447e-1194-4316-b484-6525a214f761-kube-api-access-d4lkj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lls57\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:54 crc kubenswrapper[4731]: I1125 15:25:54.065348 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lls57\" (UID: 
\"026d447e-1194-4316-b484-6525a214f761\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:54 crc kubenswrapper[4731]: I1125 15:25:54.069273 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lls57\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:54 crc kubenswrapper[4731]: I1125 15:25:54.072367 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lls57\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:54 crc kubenswrapper[4731]: I1125 15:25:54.099562 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4lkj\" (UniqueName: \"kubernetes.io/projected/026d447e-1194-4316-b484-6525a214f761-kube-api-access-d4lkj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-lls57\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:54 crc kubenswrapper[4731]: I1125 15:25:54.113753 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:25:54 crc kubenswrapper[4731]: I1125 15:25:54.741848 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57"] Nov 25 15:25:55 crc kubenswrapper[4731]: I1125 15:25:55.741741 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" event={"ID":"026d447e-1194-4316-b484-6525a214f761","Type":"ContainerStarted","Data":"ec66dc51f879f130b46c8039267edd41667c8930ce6bc76d8c2157da4f36f9a0"} Nov 25 15:25:58 crc kubenswrapper[4731]: I1125 15:25:58.769110 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" event={"ID":"026d447e-1194-4316-b484-6525a214f761","Type":"ContainerStarted","Data":"202642733a6411af15a51f6e11316299e2d8efd1e51a7cff3fc7e28e77cde80b"} Nov 25 15:25:59 crc kubenswrapper[4731]: I1125 15:25:59.798680 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" podStartSLOduration=4.108344587 podStartE2EDuration="6.798663285s" podCreationTimestamp="2025-11-25 15:25:53 +0000 UTC" firstStartedPulling="2025-11-25 15:25:54.754726237 +0000 UTC m=+1913.501860657" lastFinishedPulling="2025-11-25 15:25:57.445044925 +0000 UTC m=+1916.192179355" observedRunningTime="2025-11-25 15:25:59.791534829 +0000 UTC m=+1918.538669249" watchObservedRunningTime="2025-11-25 15:25:59.798663285 +0000 UTC m=+1918.545797715" Nov 25 15:26:05 crc kubenswrapper[4731]: I1125 15:26:05.037757 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6z999"] Nov 25 15:26:05 crc kubenswrapper[4731]: I1125 15:26:05.047171 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-6z999"] Nov 25 15:26:06 crc kubenswrapper[4731]: I1125 15:26:06.823605 4731 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="ceeba86d-0fd6-4a88-85ec-1ed6e079d413" path="/var/lib/kubelet/pods/ceeba86d-0fd6-4a88-85ec-1ed6e079d413/volumes" Nov 25 15:26:15 crc kubenswrapper[4731]: I1125 15:26:15.413009 4731 scope.go:117] "RemoveContainer" containerID="64549fa048be5631091a79e884073643b1bdffad6e6e55cb928dfe8f2552ad46" Nov 25 15:26:28 crc kubenswrapper[4731]: I1125 15:26:28.036578 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-h5455"] Nov 25 15:26:28 crc kubenswrapper[4731]: I1125 15:26:28.065017 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-h5455"] Nov 25 15:26:28 crc kubenswrapper[4731]: I1125 15:26:28.822830 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f671b8ea-21ca-4f84-a181-e98552324ee7" path="/var/lib/kubelet/pods/f671b8ea-21ca-4f84-a181-e98552324ee7/volumes" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.506390 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9jd8b"] Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.508857 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.539902 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jd8b"] Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.574913 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-utilities\") pod \"redhat-operators-9jd8b\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.575477 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-catalog-content\") pod \"redhat-operators-9jd8b\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.575629 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs6gm\" (UniqueName: \"kubernetes.io/projected/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-kube-api-access-qs6gm\") pod \"redhat-operators-9jd8b\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.677478 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-utilities\") pod \"redhat-operators-9jd8b\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.677612 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-catalog-content\") pod \"redhat-operators-9jd8b\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.677700 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qs6gm\" (UniqueName: \"kubernetes.io/projected/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-kube-api-access-qs6gm\") pod \"redhat-operators-9jd8b\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.678119 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-catalog-content\") pod \"redhat-operators-9jd8b\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.678153 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-utilities\") pod \"redhat-operators-9jd8b\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.699218 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs6gm\" (UniqueName: \"kubernetes.io/projected/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-kube-api-access-qs6gm\") pod \"redhat-operators-9jd8b\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:39 crc kubenswrapper[4731]: I1125 15:26:39.886036 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:26:40 crc kubenswrapper[4731]: I1125 15:26:40.148089 4731 generic.go:334] "Generic (PLEG): container finished" podID="026d447e-1194-4316-b484-6525a214f761" containerID="202642733a6411af15a51f6e11316299e2d8efd1e51a7cff3fc7e28e77cde80b" exitCode=0 Nov 25 15:26:40 crc kubenswrapper[4731]: I1125 15:26:40.148273 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" event={"ID":"026d447e-1194-4316-b484-6525a214f761","Type":"ContainerDied","Data":"202642733a6411af15a51f6e11316299e2d8efd1e51a7cff3fc7e28e77cde80b"} Nov 25 15:26:40 crc kubenswrapper[4731]: I1125 15:26:40.387321 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9jd8b"] Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.158539 4731 generic.go:334] "Generic (PLEG): container finished" podID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerID="69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb" exitCode=0 Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.158706 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jd8b" event={"ID":"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75","Type":"ContainerDied","Data":"69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb"} Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.159094 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jd8b" event={"ID":"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75","Type":"ContainerStarted","Data":"cd8901c9a66c4400e064b83c9b6c8d9effa420f911d589d83a959f3c4fd425ec"} Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.555005 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.619521 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lkj\" (UniqueName: \"kubernetes.io/projected/026d447e-1194-4316-b484-6525a214f761-kube-api-access-d4lkj\") pod \"026d447e-1194-4316-b484-6525a214f761\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.619625 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-inventory\") pod \"026d447e-1194-4316-b484-6525a214f761\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.619843 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-ssh-key\") pod \"026d447e-1194-4316-b484-6525a214f761\" (UID: \"026d447e-1194-4316-b484-6525a214f761\") " Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.625687 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/026d447e-1194-4316-b484-6525a214f761-kube-api-access-d4lkj" (OuterVolumeSpecName: "kube-api-access-d4lkj") pod "026d447e-1194-4316-b484-6525a214f761" (UID: "026d447e-1194-4316-b484-6525a214f761"). InnerVolumeSpecName "kube-api-access-d4lkj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.647643 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-inventory" (OuterVolumeSpecName: "inventory") pod "026d447e-1194-4316-b484-6525a214f761" (UID: "026d447e-1194-4316-b484-6525a214f761"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.649043 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "026d447e-1194-4316-b484-6525a214f761" (UID: "026d447e-1194-4316-b484-6525a214f761"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.722966 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.723019 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lkj\" (UniqueName: \"kubernetes.io/projected/026d447e-1194-4316-b484-6525a214f761-kube-api-access-d4lkj\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:41 crc kubenswrapper[4731]: I1125 15:26:41.723034 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/026d447e-1194-4316-b484-6525a214f761-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.169433 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.170221 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-lls57" event={"ID":"026d447e-1194-4316-b484-6525a214f761","Type":"ContainerDied","Data":"ec66dc51f879f130b46c8039267edd41667c8930ce6bc76d8c2157da4f36f9a0"} Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.170249 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec66dc51f879f130b46c8039267edd41667c8930ce6bc76d8c2157da4f36f9a0" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.247672 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd"] Nov 25 15:26:42 crc kubenswrapper[4731]: E1125 15:26:42.248092 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="026d447e-1194-4316-b484-6525a214f761" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.248116 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="026d447e-1194-4316-b484-6525a214f761" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.248371 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="026d447e-1194-4316-b484-6525a214f761" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.252436 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.255149 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.255154 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.255153 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.255194 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.259584 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd"] Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.334758 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zg54\" (UniqueName: \"kubernetes.io/projected/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-kube-api-access-7zg54\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.334843 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " 
pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.334937 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.436753 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zg54\" (UniqueName: \"kubernetes.io/projected/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-kube-api-access-7zg54\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.436838 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.436892 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.441380 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.448526 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.452809 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zg54\" (UniqueName: \"kubernetes.io/projected/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-kube-api-access-7zg54\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:42 crc kubenswrapper[4731]: I1125 15:26:42.573891 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:26:43 crc kubenswrapper[4731]: I1125 15:26:43.095805 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd"] Nov 25 15:26:43 crc kubenswrapper[4731]: W1125 15:26:43.101087 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c7d1dc4_dbc1_4d90_a697_4c4600bb041e.slice/crio-c5d332a8c241a690ba6e97774c1f29556b6121158c4a04d924c1a28970004c17 WatchSource:0}: Error finding container c5d332a8c241a690ba6e97774c1f29556b6121158c4a04d924c1a28970004c17: Status 404 returned error can't find the container with id c5d332a8c241a690ba6e97774c1f29556b6121158c4a04d924c1a28970004c17 Nov 25 15:26:43 crc kubenswrapper[4731]: I1125 15:26:43.179464 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" event={"ID":"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e","Type":"ContainerStarted","Data":"c5d332a8c241a690ba6e97774c1f29556b6121158c4a04d924c1a28970004c17"} Nov 25 15:26:45 crc kubenswrapper[4731]: I1125 15:26:45.198361 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jd8b" event={"ID":"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75","Type":"ContainerStarted","Data":"4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0"} Nov 25 15:26:46 crc kubenswrapper[4731]: I1125 15:26:46.208503 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" event={"ID":"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e","Type":"ContainerStarted","Data":"c400d3d44da353ccd4e85d64fb052c729c91479726c3bf124387d5c95eb5ff9a"} Nov 25 15:26:46 crc kubenswrapper[4731]: I1125 15:26:46.237910 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" podStartSLOduration=1.928536774 podStartE2EDuration="4.237889221s" podCreationTimestamp="2025-11-25 15:26:42 +0000 UTC" firstStartedPulling="2025-11-25 15:26:43.103740219 +0000 UTC m=+1961.850874649" lastFinishedPulling="2025-11-25 15:26:45.413092666 +0000 UTC m=+1964.160227096" observedRunningTime="2025-11-25 15:26:46.228493409 +0000 UTC m=+1964.975627839" watchObservedRunningTime="2025-11-25 15:26:46.237889221 +0000 UTC m=+1964.985023651" Nov 25 15:26:56 crc kubenswrapper[4731]: I1125 15:26:56.045251 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nwdc4"] Nov 25 15:26:56 crc kubenswrapper[4731]: I1125 15:26:56.054893 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-nwdc4"] Nov 25 15:26:56 crc kubenswrapper[4731]: I1125 15:26:56.821786 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c717723d-912c-4e6d-add5-f7f1628cb069" path="/var/lib/kubelet/pods/c717723d-912c-4e6d-add5-f7f1628cb069/volumes" Nov 25 15:27:00 crc kubenswrapper[4731]: I1125 15:27:00.350243 4731 generic.go:334] "Generic (PLEG): container finished" podID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerID="4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0" exitCode=0 Nov 25 15:27:00 crc kubenswrapper[4731]: I1125 15:27:00.350309 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jd8b" 
event={"ID":"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75","Type":"ContainerDied","Data":"4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0"} Nov 25 15:27:02 crc kubenswrapper[4731]: I1125 15:27:02.369955 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jd8b" event={"ID":"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75","Type":"ContainerStarted","Data":"02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942"} Nov 25 15:27:02 crc kubenswrapper[4731]: I1125 15:27:02.395092 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9jd8b" podStartSLOduration=4.266145322 podStartE2EDuration="23.395073608s" podCreationTimestamp="2025-11-25 15:26:39 +0000 UTC" firstStartedPulling="2025-11-25 15:26:42.171188772 +0000 UTC m=+1960.918323202" lastFinishedPulling="2025-11-25 15:27:01.300117028 +0000 UTC m=+1980.047251488" observedRunningTime="2025-11-25 15:27:02.388907476 +0000 UTC m=+1981.136041906" watchObservedRunningTime="2025-11-25 15:27:02.395073608 +0000 UTC m=+1981.142208038" Nov 25 15:27:09 crc kubenswrapper[4731]: I1125 15:27:09.886339 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:27:09 crc kubenswrapper[4731]: I1125 15:27:09.886696 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:27:10 crc kubenswrapper[4731]: I1125 15:27:10.938910 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jd8b" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server" probeResult="failure" output=< Nov 25 15:27:10 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:27:10 crc kubenswrapper[4731]: > Nov 25 15:27:15 crc kubenswrapper[4731]: I1125 15:27:15.502948 4731 scope.go:117] "RemoveContainer" containerID="77f20e12ba2b6aedc665350a94f95303a2f00f9c4da605142cabca833bc9658a" Nov 25 15:27:15 crc kubenswrapper[4731]: I1125 15:27:15.566784 4731 scope.go:117] "RemoveContainer" containerID="7827d5d55257d5f79e1ad8f9fe3086730c441b0ef766902faeabc9634ed9f89e" Nov 25 15:27:17 crc kubenswrapper[4731]: I1125 15:27:17.047397 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-d5dnp"] Nov 25 15:27:17 crc kubenswrapper[4731]: I1125 15:27:17.061765 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-d5dnp"] Nov 25 15:27:18 crc kubenswrapper[4731]: I1125 15:27:18.822760 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602f5438-8315-4458-a349-d02a2d0553be" path="/var/lib/kubelet/pods/602f5438-8315-4458-a349-d02a2d0553be/volumes" Nov 25 15:27:20 crc kubenswrapper[4731]: I1125 15:27:20.930830 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jd8b" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server" probeResult="failure" output=< Nov 25 15:27:20 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:27:20 crc kubenswrapper[4731]: > Nov 25 15:27:30 crc kubenswrapper[4731]: I1125 15:27:30.936645 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jd8b" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server" probeResult="failure" output=< Nov 25 15:27:30 crc 
kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:27:30 crc kubenswrapper[4731]: > Nov 25 15:27:40 crc kubenswrapper[4731]: I1125 15:27:40.969233 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jd8b" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server" probeResult="failure" output=< Nov 25 15:27:40 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:27:40 crc kubenswrapper[4731]: > Nov 25 15:27:42 crc kubenswrapper[4731]: I1125 15:27:42.764421 4731 generic.go:334] "Generic (PLEG): container finished" podID="6c7d1dc4-dbc1-4d90-a697-4c4600bb041e" containerID="c400d3d44da353ccd4e85d64fb052c729c91479726c3bf124387d5c95eb5ff9a" exitCode=0 Nov 25 15:27:42 crc kubenswrapper[4731]: I1125 15:27:42.764533 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" event={"ID":"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e","Type":"ContainerDied","Data":"c400d3d44da353ccd4e85d64fb052c729c91479726c3bf124387d5c95eb5ff9a"} Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.252835 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.263855 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zg54\" (UniqueName: \"kubernetes.io/projected/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-kube-api-access-7zg54\") pod \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.263907 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-ssh-key\") pod \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.264075 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-inventory\") pod \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\" (UID: \"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e\") " Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.313345 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-kube-api-access-7zg54" (OuterVolumeSpecName: "kube-api-access-7zg54") pod "6c7d1dc4-dbc1-4d90-a697-4c4600bb041e" (UID: "6c7d1dc4-dbc1-4d90-a697-4c4600bb041e"). InnerVolumeSpecName "kube-api-access-7zg54". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.317794 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6c7d1dc4-dbc1-4d90-a697-4c4600bb041e" (UID: "6c7d1dc4-dbc1-4d90-a697-4c4600bb041e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.318972 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-inventory" (OuterVolumeSpecName: "inventory") pod "6c7d1dc4-dbc1-4d90-a697-4c4600bb041e" (UID: "6c7d1dc4-dbc1-4d90-a697-4c4600bb041e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.367194 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.367240 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zg54\" (UniqueName: \"kubernetes.io/projected/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-kube-api-access-7zg54\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.367253 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6c7d1dc4-dbc1-4d90-a697-4c4600bb041e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.791373 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" event={"ID":"6c7d1dc4-dbc1-4d90-a697-4c4600bb041e","Type":"ContainerDied","Data":"c5d332a8c241a690ba6e97774c1f29556b6121158c4a04d924c1a28970004c17"} Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.791445 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.791456 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5d332a8c241a690ba6e97774c1f29556b6121158c4a04d924c1a28970004c17" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.868316 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sgtgr"] Nov 25 15:27:44 crc kubenswrapper[4731]: E1125 15:27:44.870314 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7d1dc4-dbc1-4d90-a697-4c4600bb041e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.870350 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7d1dc4-dbc1-4d90-a697-4c4600bb041e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.870683 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7d1dc4-dbc1-4d90-a697-4c4600bb041e" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.871479 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.874208 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.874449 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.874523 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.875586 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.885703 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sgtgr"] Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.983647 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sgtgr\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.983738 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk8rn\" (UniqueName: \"kubernetes.io/projected/b6f2222b-0367-4b65-8481-bccefb346462-kube-api-access-zk8rn\") pod \"ssh-known-hosts-edpm-deployment-sgtgr\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:44 crc kubenswrapper[4731]: I1125 15:27:44.983835 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sgtgr\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:45 crc kubenswrapper[4731]: I1125 15:27:45.085631 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sgtgr\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:45 crc kubenswrapper[4731]: I1125 15:27:45.085731 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk8rn\" (UniqueName: \"kubernetes.io/projected/b6f2222b-0367-4b65-8481-bccefb346462-kube-api-access-zk8rn\") pod \"ssh-known-hosts-edpm-deployment-sgtgr\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:45 crc kubenswrapper[4731]: I1125 15:27:45.085853 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sgtgr\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:45 crc 
kubenswrapper[4731]: I1125 15:27:45.091767 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-sgtgr\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:45 crc kubenswrapper[4731]: I1125 15:27:45.091772 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-sgtgr\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:45 crc kubenswrapper[4731]: I1125 15:27:45.109656 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk8rn\" (UniqueName: \"kubernetes.io/projected/b6f2222b-0367-4b65-8481-bccefb346462-kube-api-access-zk8rn\") pod \"ssh-known-hosts-edpm-deployment-sgtgr\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:45 crc kubenswrapper[4731]: I1125 15:27:45.191511 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:45 crc kubenswrapper[4731]: I1125 15:27:45.785503 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-sgtgr"] Nov 25 15:27:45 crc kubenswrapper[4731]: W1125 15:27:45.792018 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6f2222b_0367_4b65_8481_bccefb346462.slice/crio-931d73856c2d7d8ccde06f39e93046323f02649d472895727a96915ee55a21cb WatchSource:0}: Error finding container 931d73856c2d7d8ccde06f39e93046323f02649d472895727a96915ee55a21cb: Status 404 returned error can't find the container with id 931d73856c2d7d8ccde06f39e93046323f02649d472895727a96915ee55a21cb Nov 25 15:27:45 crc kubenswrapper[4731]: I1125 15:27:45.797520 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:27:46 crc kubenswrapper[4731]: I1125 15:27:46.828555 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" event={"ID":"b6f2222b-0367-4b65-8481-bccefb346462","Type":"ContainerStarted","Data":"931d73856c2d7d8ccde06f39e93046323f02649d472895727a96915ee55a21cb"} Nov 25 15:27:47 crc kubenswrapper[4731]: I1125 15:27:47.824008 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" event={"ID":"b6f2222b-0367-4b65-8481-bccefb346462","Type":"ContainerStarted","Data":"92de4715cfe278f9c870e41ad2e3edc07875d8d5deaf484341fb0cdd51089174"} Nov 25 15:27:47 crc kubenswrapper[4731]: I1125 15:27:47.854159 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" podStartSLOduration=3.020397812 podStartE2EDuration="3.854129347s" podCreationTimestamp="2025-11-25 15:27:44 +0000 UTC" firstStartedPulling="2025-11-25 15:27:45.797285948 +0000 UTC m=+2024.544420388" lastFinishedPulling="2025-11-25 15:27:46.631017493 +0000 UTC m=+2025.378151923" observedRunningTime="2025-11-25 15:27:47.846295093 +0000 UTC m=+2026.593429513" watchObservedRunningTime="2025-11-25 15:27:47.854129347 +0000 UTC 
m=+2026.601263777" Nov 25 15:27:50 crc kubenswrapper[4731]: I1125 15:27:50.040543 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:27:50 crc kubenswrapper[4731]: I1125 15:27:50.040980 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:27:50 crc kubenswrapper[4731]: I1125 15:27:50.940528 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jd8b" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server" probeResult="failure" output=< Nov 25 15:27:50 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:27:50 crc kubenswrapper[4731]: > Nov 25 15:27:55 crc kubenswrapper[4731]: I1125 15:27:55.927534 4731 generic.go:334] "Generic (PLEG): container finished" podID="b6f2222b-0367-4b65-8481-bccefb346462" containerID="92de4715cfe278f9c870e41ad2e3edc07875d8d5deaf484341fb0cdd51089174" exitCode=0 Nov 25 15:27:55 crc kubenswrapper[4731]: I1125 15:27:55.927677 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" event={"ID":"b6f2222b-0367-4b65-8481-bccefb346462","Type":"ContainerDied","Data":"92de4715cfe278f9c870e41ad2e3edc07875d8d5deaf484341fb0cdd51089174"} Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.366083 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.476131 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zk8rn\" (UniqueName: \"kubernetes.io/projected/b6f2222b-0367-4b65-8481-bccefb346462-kube-api-access-zk8rn\") pod \"b6f2222b-0367-4b65-8481-bccefb346462\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.476374 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-inventory-0\") pod \"b6f2222b-0367-4b65-8481-bccefb346462\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.477168 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-ssh-key-openstack-edpm-ipam\") pod \"b6f2222b-0367-4b65-8481-bccefb346462\" (UID: \"b6f2222b-0367-4b65-8481-bccefb346462\") " Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.485457 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6f2222b-0367-4b65-8481-bccefb346462-kube-api-access-zk8rn" (OuterVolumeSpecName: "kube-api-access-zk8rn") pod "b6f2222b-0367-4b65-8481-bccefb346462" (UID: "b6f2222b-0367-4b65-8481-bccefb346462"). InnerVolumeSpecName "kube-api-access-zk8rn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.509489 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b6f2222b-0367-4b65-8481-bccefb346462" (UID: "b6f2222b-0367-4b65-8481-bccefb346462"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.509869 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "b6f2222b-0367-4b65-8481-bccefb346462" (UID: "b6f2222b-0367-4b65-8481-bccefb346462"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.580159 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zk8rn\" (UniqueName: \"kubernetes.io/projected/b6f2222b-0367-4b65-8481-bccefb346462-kube-api-access-zk8rn\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.580215 4731 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-inventory-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.580229 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b6f2222b-0367-4b65-8481-bccefb346462-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.947774 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" event={"ID":"b6f2222b-0367-4b65-8481-bccefb346462","Type":"ContainerDied","Data":"931d73856c2d7d8ccde06f39e93046323f02649d472895727a96915ee55a21cb"} Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.947850 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-sgtgr" Nov 25 15:27:57 crc kubenswrapper[4731]: I1125 15:27:57.947849 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="931d73856c2d7d8ccde06f39e93046323f02649d472895727a96915ee55a21cb" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.049668 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx"] Nov 25 15:27:58 crc kubenswrapper[4731]: E1125 15:27:58.050945 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6f2222b-0367-4b65-8481-bccefb346462" containerName="ssh-known-hosts-edpm-deployment" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.050978 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6f2222b-0367-4b65-8481-bccefb346462" containerName="ssh-known-hosts-edpm-deployment" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.051454 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6f2222b-0367-4b65-8481-bccefb346462" containerName="ssh-known-hosts-edpm-deployment" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.052588 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.055067 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.055352 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.056989 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx"] Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.057388 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.057627 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.091575 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bqnjx\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.091622 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whmh6\" (UniqueName: \"kubernetes.io/projected/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-kube-api-access-whmh6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bqnjx\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.091735 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bqnjx\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.193528 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bqnjx\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.193740 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bqnjx\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.193768 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whmh6\" (UniqueName: \"kubernetes.io/projected/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-kube-api-access-whmh6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bqnjx\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.197474 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bqnjx\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.199487 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bqnjx\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.212392 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whmh6\" (UniqueName: \"kubernetes.io/projected/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-kube-api-access-whmh6\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-bqnjx\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:58 crc kubenswrapper[4731]: I1125 15:27:58.425074 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:27:59 crc kubenswrapper[4731]: I1125 15:27:59.095349 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx"] Nov 25 15:27:59 crc kubenswrapper[4731]: I1125 15:27:59.969386 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" event={"ID":"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543","Type":"ContainerStarted","Data":"a2c7a8c398720f631843c331b2032a1e72dd6be4d59437db0b8aa5c9b73f44c1"} Nov 25 15:28:00 crc kubenswrapper[4731]: I1125 15:28:00.955097 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jd8b" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server" probeResult="failure" output=< Nov 25 15:28:00 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:28:00 crc kubenswrapper[4731]: > Nov 25 15:28:00 crc kubenswrapper[4731]: I1125 15:28:00.980194 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" event={"ID":"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543","Type":"ContainerStarted","Data":"076549c62122600053fab1b90ceee7f0c78290c2058bd8cf8aab6b3ade716fcd"} Nov 25 15:28:01 crc kubenswrapper[4731]: I1125 15:28:01.005756 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" podStartSLOduration=2.05374593 podStartE2EDuration="3.005726642s" podCreationTimestamp="2025-11-25 15:27:58 +0000 UTC" firstStartedPulling="2025-11-25 15:27:59.099234285 +0000 UTC m=+2037.846368715" lastFinishedPulling="2025-11-25 15:28:00.051214997 +0000 UTC m=+2038.798349427" observedRunningTime="2025-11-25 15:28:00.997285023 +0000 UTC m=+2039.744419453" watchObservedRunningTime="2025-11-25 15:28:01.005726642 +0000 UTC m=+2039.752861072" Nov 25 15:28:09 crc kubenswrapper[4731]: I1125 15:28:09.103136 4731 generic.go:334] "Generic (PLEG): container finished" 
podID="ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543" containerID="076549c62122600053fab1b90ceee7f0c78290c2058bd8cf8aab6b3ade716fcd" exitCode=0 Nov 25 15:28:09 crc kubenswrapper[4731]: I1125 15:28:09.103237 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" event={"ID":"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543","Type":"ContainerDied","Data":"076549c62122600053fab1b90ceee7f0c78290c2058bd8cf8aab6b3ade716fcd"} Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.530109 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.577804 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-inventory\") pod \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.578241 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-ssh-key\") pod \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.578283 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whmh6\" (UniqueName: \"kubernetes.io/projected/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-kube-api-access-whmh6\") pod \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\" (UID: \"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543\") " Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.590116 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-kube-api-access-whmh6" (OuterVolumeSpecName: "kube-api-access-whmh6") pod "ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543" (UID: "ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543"). InnerVolumeSpecName "kube-api-access-whmh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.605461 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-inventory" (OuterVolumeSpecName: "inventory") pod "ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543" (UID: "ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.607764 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543" (UID: "ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.680393 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.680455 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.680468 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whmh6\" (UniqueName: \"kubernetes.io/projected/ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543-kube-api-access-whmh6\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:10 crc kubenswrapper[4731]: I1125 15:28:10.932808 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jd8b" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server" probeResult="failure" output=< Nov 25 15:28:10 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:28:10 crc kubenswrapper[4731]: > Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.124330 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" event={"ID":"ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543","Type":"ContainerDied","Data":"a2c7a8c398720f631843c331b2032a1e72dd6be4d59437db0b8aa5c9b73f44c1"} Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.124577 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-bqnjx" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.124809 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2c7a8c398720f631843c331b2032a1e72dd6be4d59437db0b8aa5c9b73f44c1" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.210211 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh"] Nov 25 15:28:11 crc kubenswrapper[4731]: E1125 15:28:11.210986 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.211017 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.211304 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.212433 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.223584 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.224127 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.224285 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.224337 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh"] Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.224658 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.289606 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.289709 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.290239 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb2cg\" (UniqueName: \"kubernetes.io/projected/4eb1672c-88da-4d49-a0dc-6ddaabc48284-kube-api-access-zb2cg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.392753 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb2cg\" (UniqueName: \"kubernetes.io/projected/4eb1672c-88da-4d49-a0dc-6ddaabc48284-kube-api-access-zb2cg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.392886 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.392960 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh\" (UID: 
\"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.397422 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.407043 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.419986 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb2cg\" (UniqueName: \"kubernetes.io/projected/4eb1672c-88da-4d49-a0dc-6ddaabc48284-kube-api-access-zb2cg\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:11 crc kubenswrapper[4731]: I1125 15:28:11.536340 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:12 crc kubenswrapper[4731]: I1125 15:28:12.096443 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh"] Nov 25 15:28:12 crc kubenswrapper[4731]: W1125 15:28:12.100633 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4eb1672c_88da_4d49_a0dc_6ddaabc48284.slice/crio-86c1e749ba2fd68a1ca7bf0f96724b8a1a48a8ddd2b112f60ea09f6170800253 WatchSource:0}: Error finding container 86c1e749ba2fd68a1ca7bf0f96724b8a1a48a8ddd2b112f60ea09f6170800253: Status 404 returned error can't find the container with id 86c1e749ba2fd68a1ca7bf0f96724b8a1a48a8ddd2b112f60ea09f6170800253 Nov 25 15:28:12 crc kubenswrapper[4731]: I1125 15:28:12.135569 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" event={"ID":"4eb1672c-88da-4d49-a0dc-6ddaabc48284","Type":"ContainerStarted","Data":"86c1e749ba2fd68a1ca7bf0f96724b8a1a48a8ddd2b112f60ea09f6170800253"} Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.382170 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q84gz"] Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.384597 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.407091 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q84gz"] Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.433255 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sghjf\" (UniqueName: \"kubernetes.io/projected/01a573d4-5c1e-4ea1-a212-96afe26a6538-kube-api-access-sghjf\") pod \"community-operators-q84gz\" (UID: \"01a573d4-5c1e-4ea1-a212-96afe26a6538\") " pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.433401 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a573d4-5c1e-4ea1-a212-96afe26a6538-utilities\") pod \"community-operators-q84gz\" (UID: \"01a573d4-5c1e-4ea1-a212-96afe26a6538\") " pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.433443 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a573d4-5c1e-4ea1-a212-96afe26a6538-catalog-content\") pod \"community-operators-q84gz\" (UID: \"01a573d4-5c1e-4ea1-a212-96afe26a6538\") " pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.535745 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a573d4-5c1e-4ea1-a212-96afe26a6538-utilities\") pod \"community-operators-q84gz\" (UID: \"01a573d4-5c1e-4ea1-a212-96afe26a6538\") " pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.535891 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a573d4-5c1e-4ea1-a212-96afe26a6538-catalog-content\") pod \"community-operators-q84gz\" (UID: \"01a573d4-5c1e-4ea1-a212-96afe26a6538\") " pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.535953 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sghjf\" (UniqueName: \"kubernetes.io/projected/01a573d4-5c1e-4ea1-a212-96afe26a6538-kube-api-access-sghjf\") pod \"community-operators-q84gz\" (UID: \"01a573d4-5c1e-4ea1-a212-96afe26a6538\") " pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.536538 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/01a573d4-5c1e-4ea1-a212-96afe26a6538-utilities\") pod \"community-operators-q84gz\" (UID: \"01a573d4-5c1e-4ea1-a212-96afe26a6538\") " pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.536899 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/01a573d4-5c1e-4ea1-a212-96afe26a6538-catalog-content\") pod \"community-operators-q84gz\" (UID: \"01a573d4-5c1e-4ea1-a212-96afe26a6538\") " pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.552482 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sghjf\" (UniqueName: \"kubernetes.io/projected/01a573d4-5c1e-4ea1-a212-96afe26a6538-kube-api-access-sghjf\") pod \"community-operators-q84gz\" (UID: \"01a573d4-5c1e-4ea1-a212-96afe26a6538\") " pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:13 crc kubenswrapper[4731]: I1125 15:28:13.744518 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:14 crc kubenswrapper[4731]: I1125 15:28:14.155985 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" event={"ID":"4eb1672c-88da-4d49-a0dc-6ddaabc48284","Type":"ContainerStarted","Data":"4a210dc580e80b684ca08260e0925192bd9a1519b109b3cf242eac194c078c4c"} Nov 25 15:28:14 crc kubenswrapper[4731]: I1125 15:28:14.182717 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" podStartSLOduration=1.852069861 podStartE2EDuration="3.182668335s" podCreationTimestamp="2025-11-25 15:28:11 +0000 UTC" firstStartedPulling="2025-11-25 15:28:12.104512788 +0000 UTC m=+2050.851647218" lastFinishedPulling="2025-11-25 15:28:13.435111262 +0000 UTC m=+2052.182245692" observedRunningTime="2025-11-25 15:28:14.171257903 +0000 UTC m=+2052.918392343" watchObservedRunningTime="2025-11-25 15:28:14.182668335 +0000 UTC m=+2052.929802775" Nov 25 15:28:14 crc kubenswrapper[4731]: I1125 15:28:14.297704 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q84gz"] Nov 25 15:28:14 crc kubenswrapper[4731]: W1125 15:28:14.301309 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01a573d4_5c1e_4ea1_a212_96afe26a6538.slice/crio-00f1ca1ad404300a66e561c926c96e1080a830d48eb72707383b0889d6124440 WatchSource:0}: Error finding container 00f1ca1ad404300a66e561c926c96e1080a830d48eb72707383b0889d6124440: Status 404 returned error can't find the container with id 00f1ca1ad404300a66e561c926c96e1080a830d48eb72707383b0889d6124440 Nov 25 15:28:15 crc kubenswrapper[4731]: I1125 15:28:15.167583 4731 generic.go:334] "Generic (PLEG): container finished" podID="01a573d4-5c1e-4ea1-a212-96afe26a6538" containerID="942d6751f1855ebe051dd1da017082cc07a5f8eb61f33ecb0ecd8855af0bf17a" exitCode=0 Nov 25 15:28:15 crc kubenswrapper[4731]: I1125 15:28:15.167802 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q84gz" event={"ID":"01a573d4-5c1e-4ea1-a212-96afe26a6538","Type":"ContainerDied","Data":"942d6751f1855ebe051dd1da017082cc07a5f8eb61f33ecb0ecd8855af0bf17a"} Nov 25 15:28:15 crc kubenswrapper[4731]: I1125 15:28:15.167950 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q84gz" event={"ID":"01a573d4-5c1e-4ea1-a212-96afe26a6538","Type":"ContainerStarted","Data":"00f1ca1ad404300a66e561c926c96e1080a830d48eb72707383b0889d6124440"} Nov 25 15:28:15 crc kubenswrapper[4731]: I1125 15:28:15.685428 4731 scope.go:117] "RemoveContainer" containerID="b38d592d9dc6999251c52315270c17392dd45e89b0cd88b9a6c2ff801f67408d" Nov 25 15:28:20 crc kubenswrapper[4731]: I1125 15:28:20.040232 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:28:20 crc kubenswrapper[4731]: I1125 15:28:20.040786 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:28:20 crc kubenswrapper[4731]: I1125 15:28:20.933976 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9jd8b" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server" probeResult="failure" output=< Nov 25 15:28:20 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:28:20 crc kubenswrapper[4731]: > Nov 25 15:28:22 crc kubenswrapper[4731]: I1125 15:28:22.270193 4731 generic.go:334] "Generic (PLEG): container finished" podID="01a573d4-5c1e-4ea1-a212-96afe26a6538" containerID="c798ee4550029e56fb0f6155610be5e972a23f509c9c6665745a2a70a74aaaa3" exitCode=0 Nov 25 15:28:22 crc kubenswrapper[4731]: I1125 15:28:22.270547 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q84gz" event={"ID":"01a573d4-5c1e-4ea1-a212-96afe26a6538","Type":"ContainerDied","Data":"c798ee4550029e56fb0f6155610be5e972a23f509c9c6665745a2a70a74aaaa3"} Nov 25 15:28:24 crc kubenswrapper[4731]: I1125 15:28:24.294204 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q84gz" event={"ID":"01a573d4-5c1e-4ea1-a212-96afe26a6538","Type":"ContainerStarted","Data":"ae9d9b75f58de3e367073226eb70f1c47ae240b2b9660c310697ab98db58f824"} Nov 25 15:28:24 crc kubenswrapper[4731]: I1125 15:28:24.316497 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q84gz" podStartSLOduration=3.390728946 podStartE2EDuration="11.316480377s" podCreationTimestamp="2025-11-25 15:28:13 +0000 UTC" firstStartedPulling="2025-11-25 15:28:15.170388372 +0000 UTC m=+2053.917522802" lastFinishedPulling="2025-11-25 15:28:23.096139803 +0000 UTC m=+2061.843274233" observedRunningTime="2025-11-25 15:28:24.312695164 +0000 UTC m=+2063.059829594" watchObservedRunningTime="2025-11-25 15:28:24.316480377 +0000 UTC m=+2063.063614797" Nov 25 15:28:25 crc kubenswrapper[4731]: I1125 15:28:25.306084 4731 generic.go:334] "Generic (PLEG): container finished" podID="4eb1672c-88da-4d49-a0dc-6ddaabc48284" containerID="4a210dc580e80b684ca08260e0925192bd9a1519b109b3cf242eac194c078c4c" exitCode=0 Nov 25 15:28:25 crc kubenswrapper[4731]: I1125 15:28:25.306208 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" event={"ID":"4eb1672c-88da-4d49-a0dc-6ddaabc48284","Type":"ContainerDied","Data":"4a210dc580e80b684ca08260e0925192bd9a1519b109b3cf242eac194c078c4c"} Nov 25 15:28:26 crc kubenswrapper[4731]: I1125 15:28:26.746911 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:26 crc kubenswrapper[4731]: I1125 15:28:26.908992 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb2cg\" (UniqueName: \"kubernetes.io/projected/4eb1672c-88da-4d49-a0dc-6ddaabc48284-kube-api-access-zb2cg\") pod \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " Nov 25 15:28:26 crc kubenswrapper[4731]: I1125 15:28:26.909183 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-inventory\") pod \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " Nov 25 15:28:26 crc kubenswrapper[4731]: I1125 15:28:26.909294 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-ssh-key\") pod \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\" (UID: \"4eb1672c-88da-4d49-a0dc-6ddaabc48284\") " Nov 25 15:28:26 crc kubenswrapper[4731]: I1125 15:28:26.920378 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4eb1672c-88da-4d49-a0dc-6ddaabc48284-kube-api-access-zb2cg" (OuterVolumeSpecName: "kube-api-access-zb2cg") pod "4eb1672c-88da-4d49-a0dc-6ddaabc48284" (UID: "4eb1672c-88da-4d49-a0dc-6ddaabc48284"). InnerVolumeSpecName "kube-api-access-zb2cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:28:26 crc kubenswrapper[4731]: I1125 15:28:26.938257 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4eb1672c-88da-4d49-a0dc-6ddaabc48284" (UID: "4eb1672c-88da-4d49-a0dc-6ddaabc48284"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:28:26 crc kubenswrapper[4731]: I1125 15:28:26.958799 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-inventory" (OuterVolumeSpecName: "inventory") pod "4eb1672c-88da-4d49-a0dc-6ddaabc48284" (UID: "4eb1672c-88da-4d49-a0dc-6ddaabc48284"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.012299 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zb2cg\" (UniqueName: \"kubernetes.io/projected/4eb1672c-88da-4d49-a0dc-6ddaabc48284-kube-api-access-zb2cg\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.012343 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.012353 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4eb1672c-88da-4d49-a0dc-6ddaabc48284-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.324426 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" event={"ID":"4eb1672c-88da-4d49-a0dc-6ddaabc48284","Type":"ContainerDied","Data":"86c1e749ba2fd68a1ca7bf0f96724b8a1a48a8ddd2b112f60ea09f6170800253"} Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.324473 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86c1e749ba2fd68a1ca7bf0f96724b8a1a48a8ddd2b112f60ea09f6170800253" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.324552 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.411148 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g"] Nov 25 15:28:27 crc kubenswrapper[4731]: E1125 15:28:27.411871 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4eb1672c-88da-4d49-a0dc-6ddaabc48284" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.411889 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4eb1672c-88da-4d49-a0dc-6ddaabc48284" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.412062 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4eb1672c-88da-4d49-a0dc-6ddaabc48284" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.412717 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.415520 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.415722 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.415937 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.416331 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.416545 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.416688 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.416712 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.424050 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.429568 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g"] Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.522420 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.523571 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.523725 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.523843 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.523903 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.524098 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.524217 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npb88\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-kube-api-access-npb88\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.524311 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.524552 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.524686 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.524749 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.525016 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.525148 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.525234 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.626926 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.627018 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.627057 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.627080 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.627110 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.628158 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.628215 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.628300 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.628476 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.628512 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.628548 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.628583 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.628623 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.628670 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npb88\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-kube-api-access-npb88\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.631237 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.631253 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.631305 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.632154 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.632211 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: 
I1125 15:28:27.633030 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.633083 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.634860 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.635237 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.636312 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.638322 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.642541 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.643698 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-bootstrap-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.646590 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npb88\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-kube-api-access-npb88\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:27 crc kubenswrapper[4731]: I1125 15:28:27.731485 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" Nov 25 15:28:28 crc kubenswrapper[4731]: I1125 15:28:28.263637 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g"] Nov 25 15:28:28 crc kubenswrapper[4731]: I1125 15:28:28.341677 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" event={"ID":"deaabd87-d775-4f26-bf8e-c515967c2867","Type":"ContainerStarted","Data":"09c0ac86173a5ef6f023a8f681d158053de5666cb80ca00c0fac5644eb284532"} Nov 25 15:28:29 crc kubenswrapper[4731]: I1125 15:28:29.941263 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:28:29 crc kubenswrapper[4731]: I1125 15:28:29.996313 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:28:30 crc kubenswrapper[4731]: I1125 15:28:30.187760 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jd8b"] Nov 25 15:28:31 crc kubenswrapper[4731]: I1125 15:28:31.367437 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9jd8b" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server" containerID="cri-o://02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942" gracePeriod=2 Nov 25 15:28:31 crc kubenswrapper[4731]: I1125 15:28:31.368579 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" event={"ID":"deaabd87-d775-4f26-bf8e-c515967c2867","Type":"ContainerStarted","Data":"87da563aa69f332f3bbe21cda48f0b048bdfe34c7197f16fe7ee741f9798d75b"} Nov 25 15:28:31 crc kubenswrapper[4731]: I1125 15:28:31.402236 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" podStartSLOduration=1.900381962 podStartE2EDuration="4.402214604s" podCreationTimestamp="2025-11-25 15:28:27 +0000 UTC" firstStartedPulling="2025-11-25 15:28:28.270996894 +0000 UTC m=+2067.018131324" lastFinishedPulling="2025-11-25 15:28:30.772829536 +0000 UTC m=+2069.519963966" observedRunningTime="2025-11-25 15:28:31.387355616 +0000 UTC m=+2070.134490066" watchObservedRunningTime="2025-11-25 15:28:31.402214604 +0000 UTC m=+2070.149349034" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.350668 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.399313 4731 generic.go:334] "Generic (PLEG): container finished" podID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerID="02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942" exitCode=0 Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.400184 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9jd8b" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.400648 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jd8b" event={"ID":"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75","Type":"ContainerDied","Data":"02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942"} Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.400682 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9jd8b" event={"ID":"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75","Type":"ContainerDied","Data":"cd8901c9a66c4400e064b83c9b6c8d9effa420f911d589d83a959f3c4fd425ec"} Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.400698 4731 scope.go:117] "RemoveContainer" containerID="02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.430768 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-utilities\") pod \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.430945 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-catalog-content\") pod \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.431189 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs6gm\" (UniqueName: \"kubernetes.io/projected/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-kube-api-access-qs6gm\") pod \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\" (UID: \"1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75\") " Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.433658 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-utilities" (OuterVolumeSpecName: "utilities") pod "1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" (UID: "1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.434586 4731 scope.go:117] "RemoveContainer" containerID="4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.438195 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-kube-api-access-qs6gm" (OuterVolumeSpecName: "kube-api-access-qs6gm") pod "1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" (UID: "1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75"). InnerVolumeSpecName "kube-api-access-qs6gm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.489352 4731 scope.go:117] "RemoveContainer" containerID="69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.511153 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" (UID: "1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.533318 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs6gm\" (UniqueName: \"kubernetes.io/projected/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-kube-api-access-qs6gm\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.533601 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.533614 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.550354 4731 scope.go:117] "RemoveContainer" containerID="02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942" Nov 25 15:28:32 crc kubenswrapper[4731]: E1125 15:28:32.550852 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942\": container with ID starting with 02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942 not found: ID does not exist" containerID="02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.550896 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942"} err="failed to get container status \"02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942\": rpc error: code = NotFound desc = could not find container \"02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942\": container with ID starting with 02c19cbc0f9b84885b39cd4369d0bea8e1a5225b767e37c8a67b5fa24c8b5942 not found: ID does not exist" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.550921 4731 scope.go:117] "RemoveContainer" containerID="4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0" Nov 25 15:28:32 crc kubenswrapper[4731]: E1125 15:28:32.551345 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0\": container with ID starting with 4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0 not found: ID does not exist" containerID="4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.551377 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0"} err="failed to get container status \"4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0\": rpc error: code = NotFound desc = could not find container \"4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0\": container with ID starting with 4e6419ccd128c6387cfa72acaee04d88df07ea43dd2fbf32b8eeab621bf49ab0 not found: ID does not exist" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.551395 4731 scope.go:117] "RemoveContainer" containerID="69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb" Nov 25 15:28:32 crc kubenswrapper[4731]: E1125 15:28:32.551614 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb\": container with ID starting with 69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb not found: ID does not exist" containerID="69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.551642 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb"} err="failed to get container status \"69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb\": rpc error: code = NotFound desc = could not find container \"69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb\": container with ID starting with 69644e914afaae1b8302a6806e1149eb8b6a9422e36e08b33eacce3cf7f12ccb not found: ID does not exist" Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.752282 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9jd8b"] Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.763866 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9jd8b"] Nov 25 15:28:32 crc kubenswrapper[4731]: I1125 15:28:32.825424 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" path="/var/lib/kubelet/pods/1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75/volumes" Nov 25 15:28:33 crc kubenswrapper[4731]: I1125 15:28:33.745473 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:33 crc kubenswrapper[4731]: I1125 15:28:33.746410 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:33 crc kubenswrapper[4731]: I1125 15:28:33.791402 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:34 crc kubenswrapper[4731]: I1125 15:28:34.464186 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q84gz" Nov 25 15:28:35 crc kubenswrapper[4731]: I1125 15:28:35.427057 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q84gz"] Nov 25 15:28:35 crc kubenswrapper[4731]: I1125 15:28:35.581626 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4fgc"] Nov 25 15:28:35 crc kubenswrapper[4731]: I1125 15:28:35.581887 4731 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-f4fgc" podUID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerName="registry-server" containerID="cri-o://d1a16f4e654c4b872d5f632174a6cd7d92c72b88e8b98220a970072618b83a5b" gracePeriod=2 Nov 25 15:28:36 crc kubenswrapper[4731]: E1125 15:28:36.276552 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d1a16f4e654c4b872d5f632174a6cd7d92c72b88e8b98220a970072618b83a5b" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 15:28:36 crc kubenswrapper[4731]: E1125 15:28:36.279392 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d1a16f4e654c4b872d5f632174a6cd7d92c72b88e8b98220a970072618b83a5b" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 15:28:36 crc kubenswrapper[4731]: E1125 15:28:36.281129 4731 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d1a16f4e654c4b872d5f632174a6cd7d92c72b88e8b98220a970072618b83a5b" cmd=["grpc_health_probe","-addr=:50051"] Nov 25 15:28:36 crc kubenswrapper[4731]: E1125 15:28:36.281175 4731 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/community-operators-f4fgc" podUID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerName="registry-server" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:39.466384 4731 generic.go:334] "Generic (PLEG): container finished" podID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerID="d1a16f4e654c4b872d5f632174a6cd7d92c72b88e8b98220a970072618b83a5b" exitCode=0 Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:39.466450 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4fgc" event={"ID":"3a5c583d-9a46-4ee6-81c0-21b971aa434c","Type":"ContainerDied","Data":"d1a16f4e654c4b872d5f632174a6cd7d92c72b88e8b98220a970072618b83a5b"} Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:40.787454 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-f4fgc" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:40.960843 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bsjm8\" (UniqueName: \"kubernetes.io/projected/3a5c583d-9a46-4ee6-81c0-21b971aa434c-kube-api-access-bsjm8\") pod \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:40.960979 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-utilities\") pod \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:40.961094 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-catalog-content\") pod \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\" (UID: \"3a5c583d-9a46-4ee6-81c0-21b971aa434c\") " Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:40.961748 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-utilities" (OuterVolumeSpecName: "utilities") pod "3a5c583d-9a46-4ee6-81c0-21b971aa434c" (UID: "3a5c583d-9a46-4ee6-81c0-21b971aa434c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:40.971567 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a5c583d-9a46-4ee6-81c0-21b971aa434c-kube-api-access-bsjm8" (OuterVolumeSpecName: "kube-api-access-bsjm8") pod "3a5c583d-9a46-4ee6-81c0-21b971aa434c" (UID: "3a5c583d-9a46-4ee6-81c0-21b971aa434c"). InnerVolumeSpecName "kube-api-access-bsjm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.014501 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a5c583d-9a46-4ee6-81c0-21b971aa434c" (UID: "3a5c583d-9a46-4ee6-81c0-21b971aa434c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.063043 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.063070 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bsjm8\" (UniqueName: \"kubernetes.io/projected/3a5c583d-9a46-4ee6-81c0-21b971aa434c-kube-api-access-bsjm8\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.063082 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a5c583d-9a46-4ee6-81c0-21b971aa434c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.492033 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-f4fgc" event={"ID":"3a5c583d-9a46-4ee6-81c0-21b971aa434c","Type":"ContainerDied","Data":"4f3157c87ae4a42388bdb810fa83b35696c5021f757bf545c27f16c739b7288a"} Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.492106 4731 scope.go:117] "RemoveContainer" containerID="d1a16f4e654c4b872d5f632174a6cd7d92c72b88e8b98220a970072618b83a5b" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.492127 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-f4fgc" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.545667 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-f4fgc"] Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.555837 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-f4fgc"] Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.608572 4731 scope.go:117] "RemoveContainer" containerID="5910fa899500bf45b0c48ba044dd2a163df87cab6225fbda735b4c522c23fddd" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:41.714554 4731 scope.go:117] "RemoveContainer" containerID="f3a1e5fc87510c67c6a8592cad04a422cd73fedd7ff535768721cb34982af7e4" Nov 25 15:28:49 crc kubenswrapper[4731]: I1125 15:28:42.830427 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" path="/var/lib/kubelet/pods/3a5c583d-9a46-4ee6-81c0-21b971aa434c/volumes" Nov 25 15:28:50 crc kubenswrapper[4731]: I1125 15:28:50.039805 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:28:50 crc kubenswrapper[4731]: I1125 15:28:50.040287 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:28:50 crc kubenswrapper[4731]: I1125 15:28:50.040332 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 15:28:50 crc kubenswrapper[4731]: I1125 15:28:50.041057 4731 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b1fa04dd595babcd00ffd658c2b29e3b04674535b213d4a578d43ab2b176f067"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:28:50 crc kubenswrapper[4731]: I1125 15:28:50.041115 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://b1fa04dd595babcd00ffd658c2b29e3b04674535b213d4a578d43ab2b176f067" gracePeriod=600 Nov 25 15:28:51 crc kubenswrapper[4731]: I1125 15:28:51.585001 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="b1fa04dd595babcd00ffd658c2b29e3b04674535b213d4a578d43ab2b176f067" exitCode=0 Nov 25 15:28:51 crc kubenswrapper[4731]: I1125 15:28:51.585803 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"b1fa04dd595babcd00ffd658c2b29e3b04674535b213d4a578d43ab2b176f067"} Nov 25 15:28:51 crc kubenswrapper[4731]: I1125 15:28:51.590109 4731 scope.go:117] "RemoveContainer" containerID="c47ca04faf812e720e1bb488294b9cdc816cc8cb86bebffd9a378f679f5c00a8" Nov 25 15:28:52 crc kubenswrapper[4731]: I1125 15:28:52.601761 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"} Nov 25 15:29:19 crc kubenswrapper[4731]: I1125 15:29:19.855157 4731 generic.go:334] "Generic (PLEG): container finished" podID="deaabd87-d775-4f26-bf8e-c515967c2867" containerID="87da563aa69f332f3bbe21cda48f0b048bdfe34c7197f16fe7ee741f9798d75b" exitCode=0 Nov 25 15:29:19 crc kubenswrapper[4731]: I1125 15:29:19.855258 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" event={"ID":"deaabd87-d775-4f26-bf8e-c515967c2867","Type":"ContainerDied","Data":"87da563aa69f332f3bbe21cda48f0b048bdfe34c7197f16fe7ee741f9798d75b"} Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.313104 4731 util.go:48] "No ready sandbox for pod can be found. 
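
"Container machine-config-daemon failed liveness probe, will be restarted" is the prober worker acting on repeated failures: once consecutive liveness failures reach the probe's failureThreshold (the Kubernetes default is 3), the container is killed with its termination grace period (600s here) and recreated, and an older dead instance (c47ca04f...) is cleaned up once the replacement starts. A toy consecutive-failure tracker (illustrative; the real logic lives in kubelet's prober worker):

    package main

    import "fmt"

    // tracker counts consecutive probe failures and fires once the threshold
    // (Kubernetes default: 3) is crossed; any success resets the count.
    type tracker struct {
        failures  int
        threshold int
    }

    func (t *tracker) observe(ok bool) (restart bool) {
        if ok {
            t.failures = 0
            return false
        }
        t.failures++
        return t.failures >= t.threshold
    }

    func main() {
        t := &tracker{threshold: 3}
        for i, ok := range []bool{true, false, false, false} {
            if t.observe(ok) {
                fmt.Printf("probe %d: unhealthy, will be restarted\n", i)
            }
        }
    }
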
Nov 25 15:29:19 crc kubenswrapper[4731]: I1125 15:29:19.855157 4731 generic.go:334] "Generic (PLEG): container finished" podID="deaabd87-d775-4f26-bf8e-c515967c2867" containerID="87da563aa69f332f3bbe21cda48f0b048bdfe34c7197f16fe7ee741f9798d75b" exitCode=0
Nov 25 15:29:19 crc kubenswrapper[4731]: I1125 15:29:19.855258 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" event={"ID":"deaabd87-d775-4f26-bf8e-c515967c2867","Type":"ContainerDied","Data":"87da563aa69f332f3bbe21cda48f0b048bdfe34c7197f16fe7ee741f9798d75b"}
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.313104 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g"
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404400 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404511 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ssh-key\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404549 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-ovn-default-certs-0\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404599 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-bootstrap-combined-ca-bundle\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404642 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-inventory\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404692 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-telemetry-combined-ca-bundle\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404766 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404810 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-repo-setup-combined-ca-bundle\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404872 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-nova-combined-ca-bundle\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404899 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-neutron-metadata-combined-ca-bundle\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.404942 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npb88\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-kube-api-access-npb88\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.405052 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ovn-combined-ca-bundle\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.405118 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.405148 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-libvirt-combined-ca-bundle\") pod \"deaabd87-d775-4f26-bf8e-c515967c2867\" (UID: \"deaabd87-d775-4f26-bf8e-c515967c2867\") "
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.416564 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.416773 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.416940 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.417301 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.417753 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.418756 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.419968 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.419978 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.422457 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.428301 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.429044 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-kube-api-access-npb88" (OuterVolumeSpecName: "kube-api-access-npb88") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "kube-api-access-npb88". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.442058 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.449621 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-inventory" (OuterVolumeSpecName: "inventory") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.455056 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "deaabd87-d775-4f26-bf8e-c515967c2867" (UID: "deaabd87-d775-4f26-bf8e-c515967c2867"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508596 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508639 4731 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508657 4731 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508670 4731 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508684 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npb88\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-kube-api-access-npb88\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508699 4731 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508714 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508727 4731 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508741 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508754 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508767 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/deaabd87-d775-4f26-bf8e-c515967c2867-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508780 4731 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508795 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.508807 4731 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/deaabd87-d775-4f26-bf8e-c515967c2867-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.878947 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g" event={"ID":"deaabd87-d775-4f26-bf8e-c515967c2867","Type":"ContainerDied","Data":"09c0ac86173a5ef6f023a8f681d158053de5666cb80ca00c0fac5644eb284532"} Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.879280 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09c0ac86173a5ef6f023a8f681d158053de5666cb80ca00c0fac5644eb284532" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.879191 4731 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.971897 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s"]
Nov 25 15:29:21 crc kubenswrapper[4731]: E1125 15:29:21.972439 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="extract-content"
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.972463 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="extract-content"
Nov 25 15:29:21 crc kubenswrapper[4731]: E1125 15:29:21.972476 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deaabd87-d775-4f26-bf8e-c515967c2867" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.972486 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="deaabd87-d775-4f26-bf8e-c515967c2867" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:29:21 crc kubenswrapper[4731]: E1125 15:29:21.972504 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server"
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.972510 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server"
Nov 25 15:29:21 crc kubenswrapper[4731]: E1125 15:29:21.972533 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerName="registry-server"
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.972539 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerName="registry-server"
Nov 25 15:29:21 crc kubenswrapper[4731]: E1125 15:29:21.972550 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerName="extract-utilities"
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.972555 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerName="extract-utilities"
Nov 25 15:29:21 crc kubenswrapper[4731]: E1125 15:29:21.972569 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerName="extract-content"
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.972576 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerName="extract-content"
Nov 25 15:29:21 crc kubenswrapper[4731]: E1125 15:29:21.972597 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="extract-utilities"
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.972603 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="extract-utilities"
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.972780 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a5c583d-9a46-4ee6-81c0-21b971aa434c" containerName="registry-server"
Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.972800 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="deaabd87-d775-4f26-bf8e-c515967c2867" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.972837 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fc6d8e6-fcf8-4d26-bd65-2a4b0b888b75" containerName="registry-server" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.973543 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:21 crc kubenswrapper[4731]: I1125 15:29:21.984663 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s"] Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.012842 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.013300 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.015488 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.015835 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.016542 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.017804 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.018027 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.018106 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.018187 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnshj\" (UniqueName: \"kubernetes.io/projected/0b01dd33-8188-45e4-a53a-c01ea9df00f7-kube-api-access-gnshj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.018301 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.120231 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.120309 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.120407 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.120433 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.120463 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnshj\" (UniqueName: \"kubernetes.io/projected/0b01dd33-8188-45e4-a53a-c01ea9df00f7-kube-api-access-gnshj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.122083 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.126243 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.126474 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: 
\"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.127457 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.139885 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnshj\" (UniqueName: \"kubernetes.io/projected/0b01dd33-8188-45e4-a53a-c01ea9df00f7-kube-api-access-gnshj\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-lks4s\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.334092 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:29:22 crc kubenswrapper[4731]: I1125 15:29:22.879189 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s"] Nov 25 15:29:23 crc kubenswrapper[4731]: I1125 15:29:23.899021 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" event={"ID":"0b01dd33-8188-45e4-a53a-c01ea9df00f7","Type":"ContainerStarted","Data":"ca198c9c479890ba0d4ce77f108e1bfda1a6510d1a86024802856872df51f26d"} Nov 25 15:29:24 crc kubenswrapper[4731]: I1125 15:29:24.912296 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" event={"ID":"0b01dd33-8188-45e4-a53a-c01ea9df00f7","Type":"ContainerStarted","Data":"e72a73428fac859c375fe0b7674410597d14a28b3c4f9301fb6342445ac169fd"} Nov 25 15:29:24 crc kubenswrapper[4731]: I1125 15:29:24.942187 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" podStartSLOduration=2.868841327 podStartE2EDuration="3.942164683s" podCreationTimestamp="2025-11-25 15:29:21 +0000 UTC" firstStartedPulling="2025-11-25 15:29:22.88318308 +0000 UTC m=+2121.630317510" lastFinishedPulling="2025-11-25 15:29:23.956506436 +0000 UTC m=+2122.703640866" observedRunningTime="2025-11-25 15:29:24.934260517 +0000 UTC m=+2123.681394967" watchObservedRunningTime="2025-11-25 15:29:24.942164683 +0000 UTC m=+2123.689299113" Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.162142 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"] Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.164442 4731 util.go:30] "No sandbox for pod can be found. 
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.162142 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"]
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.164442 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.170698 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.170752 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.171241 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"]
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.307624 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfvb4\" (UniqueName: \"kubernetes.io/projected/f5874b76-c567-4ef3-ad1c-03ba10a554a4-kube-api-access-kfvb4\") pod \"collect-profiles-29401410-w9vw2\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.308059 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5874b76-c567-4ef3-ad1c-03ba10a554a4-secret-volume\") pod \"collect-profiles-29401410-w9vw2\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.308151 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5874b76-c567-4ef3-ad1c-03ba10a554a4-config-volume\") pod \"collect-profiles-29401410-w9vw2\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.409798 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfvb4\" (UniqueName: \"kubernetes.io/projected/f5874b76-c567-4ef3-ad1c-03ba10a554a4-kube-api-access-kfvb4\") pod \"collect-profiles-29401410-w9vw2\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.409970 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5874b76-c567-4ef3-ad1c-03ba10a554a4-secret-volume\") pod \"collect-profiles-29401410-w9vw2\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.410071 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5874b76-c567-4ef3-ad1c-03ba10a554a4-config-volume\") pod \"collect-profiles-29401410-w9vw2\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"
Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.411448 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5874b76-c567-4ef3-ad1c-03ba10a554a4-config-volume\") pod \"collect-profiles-29401410-w9vw2\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"
\"collect-profiles-29401410-w9vw2\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2" Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.424959 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5874b76-c567-4ef3-ad1c-03ba10a554a4-secret-volume\") pod \"collect-profiles-29401410-w9vw2\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2" Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.430541 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfvb4\" (UniqueName: \"kubernetes.io/projected/f5874b76-c567-4ef3-ad1c-03ba10a554a4-kube-api-access-kfvb4\") pod \"collect-profiles-29401410-w9vw2\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2" Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.496363 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2" Nov 25 15:30:00 crc kubenswrapper[4731]: I1125 15:30:00.958518 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"] Nov 25 15:30:01 crc kubenswrapper[4731]: I1125 15:30:01.233301 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2" event={"ID":"f5874b76-c567-4ef3-ad1c-03ba10a554a4","Type":"ContainerStarted","Data":"d16205da9f8d6ba662de89273b1d6936b4529bd0d427292ce7e96a9d8a20aca7"} Nov 25 15:30:02 crc kubenswrapper[4731]: I1125 15:30:02.243312 4731 generic.go:334] "Generic (PLEG): container finished" podID="f5874b76-c567-4ef3-ad1c-03ba10a554a4" containerID="127695bac3f905a91605f29e4da058855025b1b9f76bb69349465ad7b18abe6e" exitCode=0 Nov 25 15:30:02 crc kubenswrapper[4731]: I1125 15:30:02.243378 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2" event={"ID":"f5874b76-c567-4ef3-ad1c-03ba10a554a4","Type":"ContainerDied","Data":"127695bac3f905a91605f29e4da058855025b1b9f76bb69349465ad7b18abe6e"} Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.660658 4731 util.go:48] "No ready sandbox for pod can be found. 
Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.660658 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"
Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.796186 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5874b76-c567-4ef3-ad1c-03ba10a554a4-config-volume\") pod \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") "
Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.796355 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5874b76-c567-4ef3-ad1c-03ba10a554a4-secret-volume\") pod \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") "
Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.796514 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfvb4\" (UniqueName: \"kubernetes.io/projected/f5874b76-c567-4ef3-ad1c-03ba10a554a4-kube-api-access-kfvb4\") pod \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\" (UID: \"f5874b76-c567-4ef3-ad1c-03ba10a554a4\") "
Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.796930 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5874b76-c567-4ef3-ad1c-03ba10a554a4-config-volume" (OuterVolumeSpecName: "config-volume") pod "f5874b76-c567-4ef3-ad1c-03ba10a554a4" (UID: "f5874b76-c567-4ef3-ad1c-03ba10a554a4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.797094 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5874b76-c567-4ef3-ad1c-03ba10a554a4-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.806070 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5874b76-c567-4ef3-ad1c-03ba10a554a4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f5874b76-c567-4ef3-ad1c-03ba10a554a4" (UID: "f5874b76-c567-4ef3-ad1c-03ba10a554a4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.806120 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5874b76-c567-4ef3-ad1c-03ba10a554a4-kube-api-access-kfvb4" (OuterVolumeSpecName: "kube-api-access-kfvb4") pod "f5874b76-c567-4ef3-ad1c-03ba10a554a4" (UID: "f5874b76-c567-4ef3-ad1c-03ba10a554a4"). InnerVolumeSpecName "kube-api-access-kfvb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.899106 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f5874b76-c567-4ef3-ad1c-03ba10a554a4-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:03 crc kubenswrapper[4731]: I1125 15:30:03.899156 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfvb4\" (UniqueName: \"kubernetes.io/projected/f5874b76-c567-4ef3-ad1c-03ba10a554a4-kube-api-access-kfvb4\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:04 crc kubenswrapper[4731]: I1125 15:30:04.263064 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2" event={"ID":"f5874b76-c567-4ef3-ad1c-03ba10a554a4","Type":"ContainerDied","Data":"d16205da9f8d6ba662de89273b1d6936b4529bd0d427292ce7e96a9d8a20aca7"} Nov 25 15:30:04 crc kubenswrapper[4731]: I1125 15:30:04.263112 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d16205da9f8d6ba662de89273b1d6936b4529bd0d427292ce7e96a9d8a20aca7" Nov 25 15:30:04 crc kubenswrapper[4731]: I1125 15:30:04.263172 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2" Nov 25 15:30:04 crc kubenswrapper[4731]: I1125 15:30:04.752660 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr"] Nov 25 15:30:04 crc kubenswrapper[4731]: I1125 15:30:04.763842 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401365-x2qcr"] Nov 25 15:30:04 crc kubenswrapper[4731]: I1125 15:30:04.822613 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8380637d-e32d-4018-8081-28b3076fad29" path="/var/lib/kubelet/pods/8380637d-e32d-4018-8081-28b3076fad29/volumes" Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.737668 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rzrvh"] Nov 25 15:30:12 crc kubenswrapper[4731]: E1125 15:30:12.738680 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5874b76-c567-4ef3-ad1c-03ba10a554a4" containerName="collect-profiles" Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.738696 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5874b76-c567-4ef3-ad1c-03ba10a554a4" containerName="collect-profiles" Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.738959 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5874b76-c567-4ef3-ad1c-03ba10a554a4" containerName="collect-profiles" Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.740407 4731 util.go:30] "No sandbox for pod can be found. 
Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.740407 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.771496 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rzrvh"]
Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.894193 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-utilities\") pod \"certified-operators-rzrvh\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") " pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.894417 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-catalog-content\") pod \"certified-operators-rzrvh\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") " pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.894541 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nx4r\" (UniqueName: \"kubernetes.io/projected/112fa3eb-aef8-4893-a054-1e6730c704eb-kube-api-access-4nx4r\") pod \"certified-operators-rzrvh\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") " pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.996748 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-catalog-content\") pod \"certified-operators-rzrvh\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") " pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.996935 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nx4r\" (UniqueName: \"kubernetes.io/projected/112fa3eb-aef8-4893-a054-1e6730c704eb-kube-api-access-4nx4r\") pod \"certified-operators-rzrvh\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") " pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.997021 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-utilities\") pod \"certified-operators-rzrvh\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") " pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.998237 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-catalog-content\") pod \"certified-operators-rzrvh\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") " pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:12 crc kubenswrapper[4731]: I1125 15:30:12.998847 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-utilities\") pod \"certified-operators-rzrvh\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") " pod="openshift-marketplace/certified-operators-rzrvh"
"MountVolume.SetUp succeeded for volume \"kube-api-access-4nx4r\" (UniqueName: \"kubernetes.io/projected/112fa3eb-aef8-4893-a054-1e6730c704eb-kube-api-access-4nx4r\") pod \"certified-operators-rzrvh\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") " pod="openshift-marketplace/certified-operators-rzrvh" Nov 25 15:30:13 crc kubenswrapper[4731]: I1125 15:30:13.064342 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rzrvh" Nov 25 15:30:13 crc kubenswrapper[4731]: I1125 15:30:13.652228 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rzrvh"] Nov 25 15:30:14 crc kubenswrapper[4731]: I1125 15:30:14.358228 4731 generic.go:334] "Generic (PLEG): container finished" podID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerID="6e184acee33759fa7c32618535cc6b2bdb0bdbc2f00eb6bd01ff0d9d9c3f23fb" exitCode=0 Nov 25 15:30:14 crc kubenswrapper[4731]: I1125 15:30:14.358524 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzrvh" event={"ID":"112fa3eb-aef8-4893-a054-1e6730c704eb","Type":"ContainerDied","Data":"6e184acee33759fa7c32618535cc6b2bdb0bdbc2f00eb6bd01ff0d9d9c3f23fb"} Nov 25 15:30:14 crc kubenswrapper[4731]: I1125 15:30:14.358552 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzrvh" event={"ID":"112fa3eb-aef8-4893-a054-1e6730c704eb","Type":"ContainerStarted","Data":"b94f865d4b8957119151059c522246005178638f4b57686b9ff6ec89f36c014e"} Nov 25 15:30:15 crc kubenswrapper[4731]: I1125 15:30:15.927387 4731 scope.go:117] "RemoveContainer" containerID="3f29d25ea98a8a078e02e86fe0303d3b903e873ea82880d9c7a7a06d31824d4e" Nov 25 15:30:16 crc kubenswrapper[4731]: I1125 15:30:16.383374 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzrvh" event={"ID":"112fa3eb-aef8-4893-a054-1e6730c704eb","Type":"ContainerStarted","Data":"485aad6900a56fa973a19388f190830b6f99d8fa5b76dbdcd0ee6030af2c1bc2"} Nov 25 15:30:17 crc kubenswrapper[4731]: I1125 15:30:17.399199 4731 generic.go:334] "Generic (PLEG): container finished" podID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerID="485aad6900a56fa973a19388f190830b6f99d8fa5b76dbdcd0ee6030af2c1bc2" exitCode=0 Nov 25 15:30:17 crc kubenswrapper[4731]: I1125 15:30:17.399503 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzrvh" event={"ID":"112fa3eb-aef8-4893-a054-1e6730c704eb","Type":"ContainerDied","Data":"485aad6900a56fa973a19388f190830b6f99d8fa5b76dbdcd0ee6030af2c1bc2"} Nov 25 15:30:18 crc kubenswrapper[4731]: I1125 15:30:18.415866 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzrvh" event={"ID":"112fa3eb-aef8-4893-a054-1e6730c704eb","Type":"ContainerStarted","Data":"26a4b1a6e72fd274b679c6d9b24720a77890687de37ea26158be4d8aaef16d45"} Nov 25 15:30:18 crc kubenswrapper[4731]: I1125 15:30:18.442293 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rzrvh" podStartSLOduration=2.92746011 podStartE2EDuration="6.442269495s" podCreationTimestamp="2025-11-25 15:30:12 +0000 UTC" firstStartedPulling="2025-11-25 15:30:14.363172613 +0000 UTC m=+2173.110307043" lastFinishedPulling="2025-11-25 15:30:17.877981998 +0000 UTC m=+2176.625116428" observedRunningTime="2025-11-25 15:30:18.434372419 +0000 UTC m=+2177.181506849" 
Nov 25 15:30:23 crc kubenswrapper[4731]: I1125 15:30:23.065063 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:23 crc kubenswrapper[4731]: I1125 15:30:23.065472 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:23 crc kubenswrapper[4731]: I1125 15:30:23.120233 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:23 crc kubenswrapper[4731]: I1125 15:30:23.527446 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:27 crc kubenswrapper[4731]: I1125 15:30:27.714795 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rzrvh"]
Nov 25 15:30:27 crc kubenswrapper[4731]: I1125 15:30:27.715475 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rzrvh" podUID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerName="registry-server" containerID="cri-o://26a4b1a6e72fd274b679c6d9b24720a77890687de37ea26158be4d8aaef16d45" gracePeriod=2
Nov 25 15:30:28 crc kubenswrapper[4731]: I1125 15:30:28.531141 4731 generic.go:334] "Generic (PLEG): container finished" podID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerID="26a4b1a6e72fd274b679c6d9b24720a77890687de37ea26158be4d8aaef16d45" exitCode=0
Nov 25 15:30:28 crc kubenswrapper[4731]: I1125 15:30:28.531212 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzrvh" event={"ID":"112fa3eb-aef8-4893-a054-1e6730c704eb","Type":"ContainerDied","Data":"26a4b1a6e72fd274b679c6d9b24720a77890687de37ea26158be4d8aaef16d45"}
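
Here the API DELETE is handled with gracePeriod=2: the runtime delivers SIGTERM, and SIGKILL would follow only if registry-server were still running two seconds later; the exitCode=0 above means it shut down cleanly inside that budget. A generic Go sketch (not OpenShift or registry-server code) of a process arranging to exit 0 within such a grace period:

    package main

    import (
        "context"
        "fmt"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        // Turn SIGTERM into context cancellation.
        ctx, stop := signal.NotifyContext(context.Background(), syscall.SIGTERM)
        defer stop()

        fmt.Println("serving; waiting for SIGTERM")
        <-ctx.Done() // SIGTERM received from the container runtime

        // Finish in-flight work quickly; here the budget is gracePeriod=2s.
        shutdownCtx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()
        _ = shutdownCtx // a real server would pass this to srv.Shutdown(shutdownCtx)
        fmt.Println("clean shutdown, exit code 0")
    }
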
Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.323009 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rzrvh"
Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.395858 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-utilities\") pod \"112fa3eb-aef8-4893-a054-1e6730c704eb\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") "
Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.396040 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-catalog-content\") pod \"112fa3eb-aef8-4893-a054-1e6730c704eb\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") "
Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.396182 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nx4r\" (UniqueName: \"kubernetes.io/projected/112fa3eb-aef8-4893-a054-1e6730c704eb-kube-api-access-4nx4r\") pod \"112fa3eb-aef8-4893-a054-1e6730c704eb\" (UID: \"112fa3eb-aef8-4893-a054-1e6730c704eb\") "
Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.397978 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-utilities" (OuterVolumeSpecName: "utilities") pod "112fa3eb-aef8-4893-a054-1e6730c704eb" (UID: "112fa3eb-aef8-4893-a054-1e6730c704eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.402523 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/112fa3eb-aef8-4893-a054-1e6730c704eb-kube-api-access-4nx4r" (OuterVolumeSpecName: "kube-api-access-4nx4r") pod "112fa3eb-aef8-4893-a054-1e6730c704eb" (UID: "112fa3eb-aef8-4893-a054-1e6730c704eb"). InnerVolumeSpecName "kube-api-access-4nx4r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.448317 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "112fa3eb-aef8-4893-a054-1e6730c704eb" (UID: "112fa3eb-aef8-4893-a054-1e6730c704eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.500836 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.500874 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nx4r\" (UniqueName: \"kubernetes.io/projected/112fa3eb-aef8-4893-a054-1e6730c704eb-kube-api-access-4nx4r\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.500890 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/112fa3eb-aef8-4893-a054-1e6730c704eb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.544197 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rzrvh" event={"ID":"112fa3eb-aef8-4893-a054-1e6730c704eb","Type":"ContainerDied","Data":"b94f865d4b8957119151059c522246005178638f4b57686b9ff6ec89f36c014e"} Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.544259 4731 scope.go:117] "RemoveContainer" containerID="26a4b1a6e72fd274b679c6d9b24720a77890687de37ea26158be4d8aaef16d45" Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.544444 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rzrvh" Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.580735 4731 scope.go:117] "RemoveContainer" containerID="485aad6900a56fa973a19388f190830b6f99d8fa5b76dbdcd0ee6030af2c1bc2" Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.583520 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rzrvh"] Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.592394 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rzrvh"] Nov 25 15:30:29 crc kubenswrapper[4731]: I1125 15:30:29.618245 4731 scope.go:117] "RemoveContainer" containerID="6e184acee33759fa7c32618535cc6b2bdb0bdbc2f00eb6bd01ff0d9d9c3f23fb" Nov 25 15:30:30 crc kubenswrapper[4731]: I1125 15:30:30.822011 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="112fa3eb-aef8-4893-a054-1e6730c704eb" path="/var/lib/kubelet/pods/112fa3eb-aef8-4893-a054-1e6730c704eb/volumes" Nov 25 15:30:37 crc kubenswrapper[4731]: I1125 15:30:37.631564 4731 generic.go:334] "Generic (PLEG): container finished" podID="0b01dd33-8188-45e4-a53a-c01ea9df00f7" containerID="e72a73428fac859c375fe0b7674410597d14a28b3c4f9301fb6342445ac169fd" exitCode=0 Nov 25 15:30:37 crc kubenswrapper[4731]: I1125 15:30:37.631657 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" event={"ID":"0b01dd33-8188-45e4-a53a-c01ea9df00f7","Type":"ContainerDied","Data":"e72a73428fac859c375fe0b7674410597d14a28b3c4f9301fb6342445ac169fd"} Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.033900 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.118436 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovncontroller-config-0\") pod \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.118727 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnshj\" (UniqueName: \"kubernetes.io/projected/0b01dd33-8188-45e4-a53a-c01ea9df00f7-kube-api-access-gnshj\") pod \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.119876 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ssh-key\") pod \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.120039 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovn-combined-ca-bundle\") pod \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.120227 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-inventory\") pod \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\" (UID: \"0b01dd33-8188-45e4-a53a-c01ea9df00f7\") " Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.125966 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "0b01dd33-8188-45e4-a53a-c01ea9df00f7" (UID: "0b01dd33-8188-45e4-a53a-c01ea9df00f7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.132231 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b01dd33-8188-45e4-a53a-c01ea9df00f7-kube-api-access-gnshj" (OuterVolumeSpecName: "kube-api-access-gnshj") pod "0b01dd33-8188-45e4-a53a-c01ea9df00f7" (UID: "0b01dd33-8188-45e4-a53a-c01ea9df00f7"). InnerVolumeSpecName "kube-api-access-gnshj". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.150532 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "0b01dd33-8188-45e4-a53a-c01ea9df00f7" (UID: "0b01dd33-8188-45e4-a53a-c01ea9df00f7"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.150586 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0b01dd33-8188-45e4-a53a-c01ea9df00f7" (UID: "0b01dd33-8188-45e4-a53a-c01ea9df00f7"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.152728 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-inventory" (OuterVolumeSpecName: "inventory") pod "0b01dd33-8188-45e4-a53a-c01ea9df00f7" (UID: "0b01dd33-8188-45e4-a53a-c01ea9df00f7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.222499 4731 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.222540 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnshj\" (UniqueName: \"kubernetes.io/projected/0b01dd33-8188-45e4-a53a-c01ea9df00f7-kube-api-access-gnshj\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.222550 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.222558 4731 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.222572 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b01dd33-8188-45e4-a53a-c01ea9df00f7-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.654194 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" event={"ID":"0b01dd33-8188-45e4-a53a-c01ea9df00f7","Type":"ContainerDied","Data":"ca198c9c479890ba0d4ce77f108e1bfda1a6510d1a86024802856872df51f26d"} Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.654242 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca198c9c479890ba0d4ce77f108e1bfda1a6510d1a86024802856872df51f26d" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.654308 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-lks4s" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.814057 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6"] Nov 25 15:30:39 crc kubenswrapper[4731]: E1125 15:30:39.814756 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b01dd33-8188-45e4-a53a-c01ea9df00f7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.814788 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b01dd33-8188-45e4-a53a-c01ea9df00f7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 15:30:39 crc kubenswrapper[4731]: E1125 15:30:39.814833 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerName="extract-utilities" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.814844 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerName="extract-utilities" Nov 25 15:30:39 crc kubenswrapper[4731]: E1125 15:30:39.814870 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerName="extract-content" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.814878 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerName="extract-content" Nov 25 15:30:39 crc kubenswrapper[4731]: E1125 15:30:39.814889 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerName="registry-server" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.814900 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerName="registry-server" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.815195 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="112fa3eb-aef8-4893-a054-1e6730c704eb" containerName="registry-server" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.815217 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b01dd33-8188-45e4-a53a-c01ea9df00f7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.816342 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.819193 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.819254 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.819204 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.821417 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.821674 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.822183 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.832899 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6"] Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.944261 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.944321 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.944373 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.944563 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqqpw\" (UniqueName: \"kubernetes.io/projected/62e629fe-ccc6-4323-b226-1bc0b4591c2a-kube-api-access-kqqpw\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.944591 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:39 crc kubenswrapper[4731]: I1125 15:30:39.944625 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.047396 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.047488 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.047553 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.048529 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.048570 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqqpw\" (UniqueName: \"kubernetes.io/projected/62e629fe-ccc6-4323-b226-1bc0b4591c2a-kube-api-access-kqqpw\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.048615 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 
15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.052868 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.052892 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.053515 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.053604 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.053616 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.071974 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqqpw\" (UniqueName: \"kubernetes.io/projected/62e629fe-ccc6-4323-b226-1bc0b4591c2a-kube-api-access-kqqpw\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.146125 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:30:40 crc kubenswrapper[4731]: I1125 15:30:40.720260 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6"] Nov 25 15:30:41 crc kubenswrapper[4731]: I1125 15:30:41.680015 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" event={"ID":"62e629fe-ccc6-4323-b226-1bc0b4591c2a","Type":"ContainerStarted","Data":"f1fe713d143201fd5a674280dc983192420ec54911a1044821edc54f180a5887"} Nov 25 15:30:43 crc kubenswrapper[4731]: I1125 15:30:43.706772 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" event={"ID":"62e629fe-ccc6-4323-b226-1bc0b4591c2a","Type":"ContainerStarted","Data":"446d00af79c1df6f8428b0072a9411933d39df50e00fb4461888e6a26d84a4a4"} Nov 25 15:30:43 crc kubenswrapper[4731]: I1125 15:30:43.738289 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" podStartSLOduration=3.061460452 podStartE2EDuration="4.738265774s" podCreationTimestamp="2025-11-25 15:30:39 +0000 UTC" firstStartedPulling="2025-11-25 15:30:40.730490869 +0000 UTC m=+2199.477625299" lastFinishedPulling="2025-11-25 15:30:42.407296181 +0000 UTC m=+2201.154430621" observedRunningTime="2025-11-25 15:30:43.724727319 +0000 UTC m=+2202.471861749" watchObservedRunningTime="2025-11-25 15:30:43.738265774 +0000 UTC m=+2202.485400224" Nov 25 15:31:20 crc kubenswrapper[4731]: I1125 15:31:20.039740 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:31:20 crc kubenswrapper[4731]: I1125 15:31:20.040301 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:31:36 crc kubenswrapper[4731]: I1125 15:31:36.360039 4731 generic.go:334] "Generic (PLEG): container finished" podID="62e629fe-ccc6-4323-b226-1bc0b4591c2a" containerID="446d00af79c1df6f8428b0072a9411933d39df50e00fb4461888e6a26d84a4a4" exitCode=0 Nov 25 15:31:36 crc kubenswrapper[4731]: I1125 15:31:36.360164 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" event={"ID":"62e629fe-ccc6-4323-b226-1bc0b4591c2a","Type":"ContainerDied","Data":"446d00af79c1df6f8428b0072a9411933d39df50e00fb4461888e6a26d84a4a4"} Nov 25 15:31:37 crc kubenswrapper[4731]: I1125 15:31:37.886856 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.045289 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-metadata-combined-ca-bundle\") pod \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.045347 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-ovn-metadata-agent-neutron-config-0\") pod \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.045520 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqqpw\" (UniqueName: \"kubernetes.io/projected/62e629fe-ccc6-4323-b226-1bc0b4591c2a-kube-api-access-kqqpw\") pod \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.045615 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-ssh-key\") pod \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.045722 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-inventory\") pod \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.045751 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-nova-metadata-neutron-config-0\") pod \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\" (UID: \"62e629fe-ccc6-4323-b226-1bc0b4591c2a\") " Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.053125 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62e629fe-ccc6-4323-b226-1bc0b4591c2a-kube-api-access-kqqpw" (OuterVolumeSpecName: "kube-api-access-kqqpw") pod "62e629fe-ccc6-4323-b226-1bc0b4591c2a" (UID: "62e629fe-ccc6-4323-b226-1bc0b4591c2a"). InnerVolumeSpecName "kube-api-access-kqqpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.054356 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "62e629fe-ccc6-4323-b226-1bc0b4591c2a" (UID: "62e629fe-ccc6-4323-b226-1bc0b4591c2a"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.079096 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-inventory" (OuterVolumeSpecName: "inventory") pod "62e629fe-ccc6-4323-b226-1bc0b4591c2a" (UID: "62e629fe-ccc6-4323-b226-1bc0b4591c2a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.079182 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "62e629fe-ccc6-4323-b226-1bc0b4591c2a" (UID: "62e629fe-ccc6-4323-b226-1bc0b4591c2a"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.082395 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "62e629fe-ccc6-4323-b226-1bc0b4591c2a" (UID: "62e629fe-ccc6-4323-b226-1bc0b4591c2a"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.090233 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "62e629fe-ccc6-4323-b226-1bc0b4591c2a" (UID: "62e629fe-ccc6-4323-b226-1bc0b4591c2a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.147664 4731 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.147700 4731 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.147711 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqqpw\" (UniqueName: \"kubernetes.io/projected/62e629fe-ccc6-4323-b226-1bc0b4591c2a-kube-api-access-kqqpw\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.147720 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.147728 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.147736 4731 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/62e629fe-ccc6-4323-b226-1bc0b4591c2a-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.381935 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" event={"ID":"62e629fe-ccc6-4323-b226-1bc0b4591c2a","Type":"ContainerDied","Data":"f1fe713d143201fd5a674280dc983192420ec54911a1044821edc54f180a5887"} Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.382000 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1fe713d143201fd5a674280dc983192420ec54911a1044821edc54f180a5887" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.382007 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.484860 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr"] Nov 25 15:31:38 crc kubenswrapper[4731]: E1125 15:31:38.486409 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e629fe-ccc6-4323-b226-1bc0b4591c2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.486436 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e629fe-ccc6-4323-b226-1bc0b4591c2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.486712 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="62e629fe-ccc6-4323-b226-1bc0b4591c2a" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.487798 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.490591 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.490886 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.492801 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.492875 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.493010 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.527359 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr"] Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.657511 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.657635 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.657702 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.657801 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8jm7\" (UniqueName: \"kubernetes.io/projected/90c7443c-b0ed-4432-8a15-f1d26a2161e9-kube-api-access-b8jm7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.658178 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.760542 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.761013 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8jm7\" (UniqueName: \"kubernetes.io/projected/90c7443c-b0ed-4432-8a15-f1d26a2161e9-kube-api-access-b8jm7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.761352 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.761699 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.762176 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.766522 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.767378 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.769012 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.774660 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-ssh-key\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.782374 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8jm7\" (UniqueName: \"kubernetes.io/projected/90c7443c-b0ed-4432-8a15-f1d26a2161e9-kube-api-access-b8jm7\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gxthr\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:38 crc kubenswrapper[4731]: I1125 15:31:38.844639 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:31:39 crc kubenswrapper[4731]: I1125 15:31:39.420433 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr"] Nov 25 15:31:40 crc kubenswrapper[4731]: I1125 15:31:40.408831 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" event={"ID":"90c7443c-b0ed-4432-8a15-f1d26a2161e9","Type":"ContainerStarted","Data":"03e113e2d800f01ebd37e6ade026fda59b92b01e40a12be38c2c620ea88179ed"} Nov 25 15:31:44 crc kubenswrapper[4731]: I1125 15:31:44.449481 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" event={"ID":"90c7443c-b0ed-4432-8a15-f1d26a2161e9","Type":"ContainerStarted","Data":"9bacb1dbbee9cd4dd46bc3e8e2706935d07b6056da1fdd4fc6b7f6599e1fe3ef"} Nov 25 15:31:50 crc kubenswrapper[4731]: I1125 15:31:50.039624 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:31:50 crc kubenswrapper[4731]: I1125 15:31:50.040352 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:32:20 crc kubenswrapper[4731]: I1125 15:32:20.039604 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:32:20 crc kubenswrapper[4731]: I1125 15:32:20.040138 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:32:20 crc kubenswrapper[4731]: I1125 15:32:20.040183 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 15:32:20 crc kubenswrapper[4731]: I1125 15:32:20.040915 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:32:20 crc kubenswrapper[4731]: I1125 15:32:20.040975 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" gracePeriod=600 Nov 25 15:32:20 crc kubenswrapper[4731]: E1125 15:32:20.915906 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:32:20 crc kubenswrapper[4731]: I1125 15:32:20.949898 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" exitCode=0 Nov 25 15:32:20 crc kubenswrapper[4731]: I1125 15:32:20.949941 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"} Nov 25 15:32:20 crc kubenswrapper[4731]: I1125 15:32:20.949974 4731 scope.go:117] "RemoveContainer" containerID="b1fa04dd595babcd00ffd658c2b29e3b04674535b213d4a578d43ab2b176f067" Nov 25 15:32:20 crc kubenswrapper[4731]: I1125 15:32:20.950690 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:32:20 crc kubenswrapper[4731]: E1125 15:32:20.950967 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:32:20 crc kubenswrapper[4731]: I1125 15:32:20.972190 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" podStartSLOduration=38.994747189 podStartE2EDuration="42.972166244s" podCreationTimestamp="2025-11-25 15:31:38 +0000 UTC" firstStartedPulling="2025-11-25 15:31:39.42890753 +0000 UTC m=+2258.176041950" lastFinishedPulling="2025-11-25 15:31:43.406326575 +0000 UTC m=+2262.153461005" observedRunningTime="2025-11-25 15:31:44.479288699 +0000 UTC m=+2263.226423139" watchObservedRunningTime="2025-11-25 15:32:20.972166244 +0000 UTC m=+2299.719300674" Nov 25 15:32:33 crc kubenswrapper[4731]: I1125 15:32:33.811708 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:32:33 crc kubenswrapper[4731]: E1125 15:32:33.812539 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:32:46 crc kubenswrapper[4731]: I1125 15:32:46.811897 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:32:46 crc kubenswrapper[4731]: E1125 15:32:46.812798 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:33:01 crc kubenswrapper[4731]: I1125 15:33:01.811942 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:33:01 crc kubenswrapper[4731]: E1125 15:33:01.812713 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:33:15 crc kubenswrapper[4731]: I1125 15:33:15.812088 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:33:15 crc kubenswrapper[4731]: E1125 15:33:15.812985 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:33:28 crc kubenswrapper[4731]: I1125 15:33:28.812125 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:33:28 crc kubenswrapper[4731]: E1125 15:33:28.812992 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:33:41 crc kubenswrapper[4731]: I1125 15:33:41.811856 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:33:41 crc kubenswrapper[4731]: E1125 15:33:41.813077 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:33:53 crc kubenswrapper[4731]: I1125 15:33:53.811493 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"
Nov 25 15:33:53 crc kubenswrapper[4731]: E1125 15:33:53.812388 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:34:06 crc kubenswrapper[4731]: I1125 15:34:06.811528 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"
Nov 25 15:34:06 crc kubenswrapper[4731]: E1125 15:34:06.812453 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:34:20 crc kubenswrapper[4731]: I1125 15:34:20.812073 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"
Nov 25 15:34:20 crc kubenswrapper[4731]: E1125 15:34:20.813532 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:34:32 crc kubenswrapper[4731]: I1125 15:34:32.820249 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"
Nov 25 15:34:32 crc kubenswrapper[4731]: E1125 15:34:32.821338 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:34:46 crc kubenswrapper[4731]: I1125 15:34:46.811610 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"
Nov 25 15:34:46 crc kubenswrapper[4731]: E1125 15:34:46.813271 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
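The repeating "RemoveContainer" / "Error syncing pod, skipping" pairs above arrive roughly every 13 seconds as the kubelet re-syncs the pod, but no restart happens: each attempt is rejected with "back-off 5m0s", meaning machine-config-daemon has reached the CrashLoopBackOff ceiling. A minimal Go sketch of that schedule, assuming the documented kubelet defaults (10s initial delay, doubling per crash, 5m cap; none of these constants appear in this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet defaults (not read from this log): the restart
	// delay starts at 10s, doubles after each crash, and is capped at 5m.
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for crash := 1; crash <= 8; crash++ {
		fmt.Printf("crash %d: next restart attempt in %v\n", crash, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// From crash 6 onward the delay is pinned at 5m0s -- the
	// "back-off 5m0s" seen in the entries above.
}

The backoff only resets after the container runs cleanly for a while, which is why the identical error keeps repeating for minutes at a stretch below.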
Nov 25 15:35:00 crc kubenswrapper[4731]: I1125 15:35:00.812294 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"
Nov 25 15:35:00 crc kubenswrapper[4731]: E1125 15:35:00.813258 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:35:15 crc kubenswrapper[4731]: I1125 15:35:15.811506 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"
Nov 25 15:35:15 crc kubenswrapper[4731]: E1125 15:35:15.812427 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:35:15 crc kubenswrapper[4731]: I1125 15:35:15.842998 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxlc9"]
Nov 25 15:35:15 crc kubenswrapper[4731]: I1125 15:35:15.851405 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxlc9"
Nov 25 15:35:15 crc kubenswrapper[4731]: I1125 15:35:15.853116 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxlc9"]
Nov 25 15:35:15 crc kubenswrapper[4731]: I1125 15:35:15.899448 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-catalog-content\") pod \"redhat-marketplace-lxlc9\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " pod="openshift-marketplace/redhat-marketplace-lxlc9"
Nov 25 15:35:15 crc kubenswrapper[4731]: I1125 15:35:15.899801 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85scl\" (UniqueName: \"kubernetes.io/projected/100b0088-57e9-4324-b2a1-4011542381ff-kube-api-access-85scl\") pod \"redhat-marketplace-lxlc9\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " pod="openshift-marketplace/redhat-marketplace-lxlc9"
Nov 25 15:35:15 crc kubenswrapper[4731]: I1125 15:35:15.900123 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-utilities\") pod \"redhat-marketplace-lxlc9\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " pod="openshift-marketplace/redhat-marketplace-lxlc9"
Nov 25 15:35:16 crc kubenswrapper[4731]: I1125 15:35:16.001487 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-utilities\") pod \"redhat-marketplace-lxlc9\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25
15:35:16 crc kubenswrapper[4731]: I1125 15:35:16.001594 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-catalog-content\") pod \"redhat-marketplace-lxlc9\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:16 crc kubenswrapper[4731]: I1125 15:35:16.002045 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-utilities\") pod \"redhat-marketplace-lxlc9\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:16 crc kubenswrapper[4731]: I1125 15:35:16.002107 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-catalog-content\") pod \"redhat-marketplace-lxlc9\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:16 crc kubenswrapper[4731]: I1125 15:35:16.002264 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85scl\" (UniqueName: \"kubernetes.io/projected/100b0088-57e9-4324-b2a1-4011542381ff-kube-api-access-85scl\") pod \"redhat-marketplace-lxlc9\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:16 crc kubenswrapper[4731]: I1125 15:35:16.023246 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85scl\" (UniqueName: \"kubernetes.io/projected/100b0088-57e9-4324-b2a1-4011542381ff-kube-api-access-85scl\") pod \"redhat-marketplace-lxlc9\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:16 crc kubenswrapper[4731]: I1125 15:35:16.173159 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:16 crc kubenswrapper[4731]: I1125 15:35:16.693535 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxlc9"] Nov 25 15:35:17 crc kubenswrapper[4731]: I1125 15:35:17.579014 4731 generic.go:334] "Generic (PLEG): container finished" podID="100b0088-57e9-4324-b2a1-4011542381ff" containerID="f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d" exitCode=0 Nov 25 15:35:17 crc kubenswrapper[4731]: I1125 15:35:17.579117 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxlc9" event={"ID":"100b0088-57e9-4324-b2a1-4011542381ff","Type":"ContainerDied","Data":"f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d"} Nov 25 15:35:17 crc kubenswrapper[4731]: I1125 15:35:17.579282 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxlc9" event={"ID":"100b0088-57e9-4324-b2a1-4011542381ff","Type":"ContainerStarted","Data":"7fb36d1000c61bd17bda6bc73e0ee15e56ac8cf961b57dadaf81c10502657a32"} Nov 25 15:35:17 crc kubenswrapper[4731]: I1125 15:35:17.580985 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:35:19 crc kubenswrapper[4731]: I1125 15:35:19.605312 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxlc9" event={"ID":"100b0088-57e9-4324-b2a1-4011542381ff","Type":"ContainerStarted","Data":"9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7"} Nov 25 15:35:20 crc kubenswrapper[4731]: I1125 15:35:20.619325 4731 generic.go:334] "Generic (PLEG): container finished" podID="100b0088-57e9-4324-b2a1-4011542381ff" containerID="9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7" exitCode=0 Nov 25 15:35:20 crc kubenswrapper[4731]: I1125 15:35:20.619462 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxlc9" event={"ID":"100b0088-57e9-4324-b2a1-4011542381ff","Type":"ContainerDied","Data":"9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7"} Nov 25 15:35:22 crc kubenswrapper[4731]: I1125 15:35:22.646562 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxlc9" event={"ID":"100b0088-57e9-4324-b2a1-4011542381ff","Type":"ContainerStarted","Data":"3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594"} Nov 25 15:35:22 crc kubenswrapper[4731]: I1125 15:35:22.677662 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lxlc9" podStartSLOduration=3.710552139 podStartE2EDuration="7.677642846s" podCreationTimestamp="2025-11-25 15:35:15 +0000 UTC" firstStartedPulling="2025-11-25 15:35:17.580561454 +0000 UTC m=+2476.327695884" lastFinishedPulling="2025-11-25 15:35:21.547652161 +0000 UTC m=+2480.294786591" observedRunningTime="2025-11-25 15:35:22.666991586 +0000 UTC m=+2481.414126016" watchObservedRunningTime="2025-11-25 15:35:22.677642846 +0000 UTC m=+2481.424777276" Nov 25 15:35:26 crc kubenswrapper[4731]: I1125 15:35:26.174226 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:26 crc kubenswrapper[4731]: I1125 15:35:26.174883 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 
15:35:26 crc kubenswrapper[4731]: I1125 15:35:26.226532 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:26 crc kubenswrapper[4731]: I1125 15:35:26.731207 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:26 crc kubenswrapper[4731]: I1125 15:35:26.794462 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxlc9"] Nov 25 15:35:26 crc kubenswrapper[4731]: I1125 15:35:26.811401 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:35:26 crc kubenswrapper[4731]: E1125 15:35:26.811676 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:35:28 crc kubenswrapper[4731]: I1125 15:35:28.732539 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxlc9" podUID="100b0088-57e9-4324-b2a1-4011542381ff" containerName="registry-server" containerID="cri-o://3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594" gracePeriod=2 Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.215169 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.379444 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-catalog-content\") pod \"100b0088-57e9-4324-b2a1-4011542381ff\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.379571 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-utilities\") pod \"100b0088-57e9-4324-b2a1-4011542381ff\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.379670 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85scl\" (UniqueName: \"kubernetes.io/projected/100b0088-57e9-4324-b2a1-4011542381ff-kube-api-access-85scl\") pod \"100b0088-57e9-4324-b2a1-4011542381ff\" (UID: \"100b0088-57e9-4324-b2a1-4011542381ff\") " Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.380522 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-utilities" (OuterVolumeSpecName: "utilities") pod "100b0088-57e9-4324-b2a1-4011542381ff" (UID: "100b0088-57e9-4324-b2a1-4011542381ff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.390180 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100b0088-57e9-4324-b2a1-4011542381ff-kube-api-access-85scl" (OuterVolumeSpecName: "kube-api-access-85scl") pod "100b0088-57e9-4324-b2a1-4011542381ff" (UID: "100b0088-57e9-4324-b2a1-4011542381ff"). InnerVolumeSpecName "kube-api-access-85scl". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.399623 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "100b0088-57e9-4324-b2a1-4011542381ff" (UID: "100b0088-57e9-4324-b2a1-4011542381ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.482631 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85scl\" (UniqueName: \"kubernetes.io/projected/100b0088-57e9-4324-b2a1-4011542381ff-kube-api-access-85scl\") on node \"crc\" DevicePath \"\"" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.482675 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.482687 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/100b0088-57e9-4324-b2a1-4011542381ff-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.744584 4731 generic.go:334] "Generic (PLEG): container finished" podID="100b0088-57e9-4324-b2a1-4011542381ff" containerID="3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594" exitCode=0 Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.744626 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxlc9" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.744641 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxlc9" event={"ID":"100b0088-57e9-4324-b2a1-4011542381ff","Type":"ContainerDied","Data":"3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594"} Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.744686 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxlc9" event={"ID":"100b0088-57e9-4324-b2a1-4011542381ff","Type":"ContainerDied","Data":"7fb36d1000c61bd17bda6bc73e0ee15e56ac8cf961b57dadaf81c10502657a32"} Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.744703 4731 scope.go:117] "RemoveContainer" containerID="3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.790361 4731 scope.go:117] "RemoveContainer" containerID="9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.794697 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxlc9"] Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.804843 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxlc9"] Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.812596 4731 scope.go:117] "RemoveContainer" containerID="f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.857680 4731 scope.go:117] "RemoveContainer" containerID="3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594" Nov 25 15:35:29 crc kubenswrapper[4731]: E1125 15:35:29.858355 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594\": container with ID starting with 3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594 not found: ID does not exist" containerID="3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.858424 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594"} err="failed to get container status \"3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594\": rpc error: code = NotFound desc = could not find container \"3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594\": container with ID starting with 3334a63b54dcbd0ec1fcc594376e905c81026927cd53cfdb72aa8db7b030b594 not found: ID does not exist" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.858461 4731 scope.go:117] "RemoveContainer" containerID="9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7" Nov 25 15:35:29 crc kubenswrapper[4731]: E1125 15:35:29.859837 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7\": container with ID starting with 9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7 not found: ID does not exist" containerID="9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.859909 4731 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7"} err="failed to get container status \"9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7\": rpc error: code = NotFound desc = could not find container \"9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7\": container with ID starting with 9de8f0e50e5e01271cebaa339c49c0f6f1f641f047f3bc3e44c2ee933ed610f7 not found: ID does not exist" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.859954 4731 scope.go:117] "RemoveContainer" containerID="f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d" Nov 25 15:35:29 crc kubenswrapper[4731]: E1125 15:35:29.860308 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d\": container with ID starting with f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d not found: ID does not exist" containerID="f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d" Nov 25 15:35:29 crc kubenswrapper[4731]: I1125 15:35:29.860353 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d"} err="failed to get container status \"f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d\": rpc error: code = NotFound desc = could not find container \"f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d\": container with ID starting with f2e474c3bb15f2b5473256a0a46df4758689ddb0868d5f29c62276690c30833d not found: ID does not exist" Nov 25 15:35:30 crc kubenswrapper[4731]: I1125 15:35:30.824190 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100b0088-57e9-4324-b2a1-4011542381ff" path="/var/lib/kubelet/pods/100b0088-57e9-4324-b2a1-4011542381ff/volumes" Nov 25 15:35:37 crc kubenswrapper[4731]: I1125 15:35:37.812190 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:35:37 crc kubenswrapper[4731]: E1125 15:35:37.815304 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:35:51 crc kubenswrapper[4731]: I1125 15:35:51.812473 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:35:51 crc kubenswrapper[4731]: E1125 15:35:51.813544 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:36:05 crc kubenswrapper[4731]: I1125 15:36:05.812611 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:36:05 crc 
kubenswrapper[4731]: E1125 15:36:05.813382 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:36:16 crc kubenswrapper[4731]: I1125 15:36:16.811287 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:36:16 crc kubenswrapper[4731]: E1125 15:36:16.812194 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:36:27 crc kubenswrapper[4731]: E1125 15:36:27.210286 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90c7443c_b0ed_4432_8a15_f1d26a2161e9.slice/crio-9bacb1dbbee9cd4dd46bc3e8e2706935d07b6056da1fdd4fc6b7f6599e1fe3ef.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90c7443c_b0ed_4432_8a15_f1d26a2161e9.slice/crio-conmon-9bacb1dbbee9cd4dd46bc3e8e2706935d07b6056da1fdd4fc6b7f6599e1fe3ef.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:36:27 crc kubenswrapper[4731]: I1125 15:36:27.332551 4731 generic.go:334] "Generic (PLEG): container finished" podID="90c7443c-b0ed-4432-8a15-f1d26a2161e9" containerID="9bacb1dbbee9cd4dd46bc3e8e2706935d07b6056da1fdd4fc6b7f6599e1fe3ef" exitCode=0 Nov 25 15:36:27 crc kubenswrapper[4731]: I1125 15:36:27.332597 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" event={"ID":"90c7443c-b0ed-4432-8a15-f1d26a2161e9","Type":"ContainerDied","Data":"9bacb1dbbee9cd4dd46bc3e8e2706935d07b6056da1fdd4fc6b7f6599e1fe3ef"} Nov 25 15:36:27 crc kubenswrapper[4731]: I1125 15:36:27.811741 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:36:27 crc kubenswrapper[4731]: E1125 15:36:27.812331 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:36:28 crc kubenswrapper[4731]: I1125 15:36:28.807794 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:36:28 crc kubenswrapper[4731]: I1125 15:36:28.982147 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-combined-ca-bundle\") pod \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " Nov 25 15:36:28 crc kubenswrapper[4731]: I1125 15:36:28.982584 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-inventory\") pod \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " Nov 25 15:36:28 crc kubenswrapper[4731]: I1125 15:36:28.982708 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8jm7\" (UniqueName: \"kubernetes.io/projected/90c7443c-b0ed-4432-8a15-f1d26a2161e9-kube-api-access-b8jm7\") pod \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " Nov 25 15:36:28 crc kubenswrapper[4731]: I1125 15:36:28.982947 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-ssh-key\") pod \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " Nov 25 15:36:28 crc kubenswrapper[4731]: I1125 15:36:28.983072 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-secret-0\") pod \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\" (UID: \"90c7443c-b0ed-4432-8a15-f1d26a2161e9\") " Nov 25 15:36:28 crc kubenswrapper[4731]: I1125 15:36:28.988432 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "90c7443c-b0ed-4432-8a15-f1d26a2161e9" (UID: "90c7443c-b0ed-4432-8a15-f1d26a2161e9"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:36:28 crc kubenswrapper[4731]: I1125 15:36:28.997087 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90c7443c-b0ed-4432-8a15-f1d26a2161e9-kube-api-access-b8jm7" (OuterVolumeSpecName: "kube-api-access-b8jm7") pod "90c7443c-b0ed-4432-8a15-f1d26a2161e9" (UID: "90c7443c-b0ed-4432-8a15-f1d26a2161e9"). InnerVolumeSpecName "kube-api-access-b8jm7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.013943 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "90c7443c-b0ed-4432-8a15-f1d26a2161e9" (UID: "90c7443c-b0ed-4432-8a15-f1d26a2161e9"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.014762 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-inventory" (OuterVolumeSpecName: "inventory") pod "90c7443c-b0ed-4432-8a15-f1d26a2161e9" (UID: "90c7443c-b0ed-4432-8a15-f1d26a2161e9"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.016191 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "90c7443c-b0ed-4432-8a15-f1d26a2161e9" (UID: "90c7443c-b0ed-4432-8a15-f1d26a2161e9"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.085246 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.085275 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8jm7\" (UniqueName: \"kubernetes.io/projected/90c7443c-b0ed-4432-8a15-f1d26a2161e9-kube-api-access-b8jm7\") on node \"crc\" DevicePath \"\"" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.085286 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.085296 4731 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.085305 4731 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90c7443c-b0ed-4432-8a15-f1d26a2161e9-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.353962 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" event={"ID":"90c7443c-b0ed-4432-8a15-f1d26a2161e9","Type":"ContainerDied","Data":"03e113e2d800f01ebd37e6ade026fda59b92b01e40a12be38c2c620ea88179ed"} Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.354295 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03e113e2d800f01ebd37e6ade026fda59b92b01e40a12be38c2c620ea88179ed" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.354012 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gxthr" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.473162 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8"] Nov 25 15:36:29 crc kubenswrapper[4731]: E1125 15:36:29.473582 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c7443c-b0ed-4432-8a15-f1d26a2161e9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.473605 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c7443c-b0ed-4432-8a15-f1d26a2161e9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 15:36:29 crc kubenswrapper[4731]: E1125 15:36:29.473633 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100b0088-57e9-4324-b2a1-4011542381ff" containerName="extract-content" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.473641 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="100b0088-57e9-4324-b2a1-4011542381ff" containerName="extract-content" Nov 25 15:36:29 crc kubenswrapper[4731]: E1125 15:36:29.473662 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100b0088-57e9-4324-b2a1-4011542381ff" containerName="extract-utilities" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.473668 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="100b0088-57e9-4324-b2a1-4011542381ff" containerName="extract-utilities" Nov 25 15:36:29 crc kubenswrapper[4731]: E1125 15:36:29.473681 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100b0088-57e9-4324-b2a1-4011542381ff" containerName="registry-server" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.473686 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="100b0088-57e9-4324-b2a1-4011542381ff" containerName="registry-server" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.473893 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="100b0088-57e9-4324-b2a1-4011542381ff" containerName="registry-server" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.473921 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="90c7443c-b0ed-4432-8a15-f1d26a2161e9" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.474553 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.479330 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.479690 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.479721 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.480766 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.481089 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.481103 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.481334 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.487010 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8"] Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.594168 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.594322 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.594371 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.594419 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zkj6\" (UniqueName: \"kubernetes.io/projected/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-kube-api-access-4zkj6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.594456 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: 
\"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.594506 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.594532 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.594562 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.594589 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.696137 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zkj6\" (UniqueName: \"kubernetes.io/projected/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-kube-api-access-4zkj6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.696507 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.696624 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.696743 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.696884 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.696999 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.697216 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.697443 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.697564 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.697799 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.702388 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.703011 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-combined-ca-bundle\") 
pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.703099 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.703443 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.713549 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.713546 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.715617 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.719208 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zkj6\" (UniqueName: \"kubernetes.io/projected/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-kube-api-access-4zkj6\") pod \"nova-edpm-deployment-openstack-edpm-ipam-9jzw8\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:29 crc kubenswrapper[4731]: I1125 15:36:29.799463 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" Nov 25 15:36:30 crc kubenswrapper[4731]: I1125 15:36:30.307117 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8"] Nov 25 15:36:30 crc kubenswrapper[4731]: I1125 15:36:30.362802 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" event={"ID":"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35","Type":"ContainerStarted","Data":"41ec32f8eae68b198232684e9b765548d03bac41e74407c546bf5f1216a3d8d5"} Nov 25 15:36:31 crc kubenswrapper[4731]: I1125 15:36:31.372909 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" event={"ID":"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35","Type":"ContainerStarted","Data":"3f41381815a08fe681c53f08fe8e17343a15a9fbb33823f81fd70e3561a8a1a6"} Nov 25 15:36:31 crc kubenswrapper[4731]: I1125 15:36:31.419253 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" podStartSLOduration=1.917570132 podStartE2EDuration="2.419227908s" podCreationTimestamp="2025-11-25 15:36:29 +0000 UTC" firstStartedPulling="2025-11-25 15:36:30.311138265 +0000 UTC m=+2549.058272695" lastFinishedPulling="2025-11-25 15:36:30.812796031 +0000 UTC m=+2549.559930471" observedRunningTime="2025-11-25 15:36:31.401135637 +0000 UTC m=+2550.148270057" watchObservedRunningTime="2025-11-25 15:36:31.419227908 +0000 UTC m=+2550.166362338" Nov 25 15:36:39 crc kubenswrapper[4731]: I1125 15:36:39.811483 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:36:39 crc kubenswrapper[4731]: E1125 15:36:39.812248 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:36:50 crc kubenswrapper[4731]: I1125 15:36:50.813547 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:36:50 crc kubenswrapper[4731]: E1125 15:36:50.814389 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:37:05 crc kubenswrapper[4731]: I1125 15:37:05.766025 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-7d695c9b56-hxwn7" podUID="f5180254-3122-4550-b259-f870f4575073" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.72:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:37:05 crc kubenswrapper[4731]: I1125 15:37:05.812474 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:37:05 crc 
kubenswrapper[4731]: E1125 15:37:05.812738 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:37:19 crc kubenswrapper[4731]: I1125 15:37:19.811673 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:37:19 crc kubenswrapper[4731]: E1125 15:37:19.812674 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:37:30 crc kubenswrapper[4731]: I1125 15:37:30.812148 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc" Nov 25 15:37:31 crc kubenswrapper[4731]: I1125 15:37:31.917700 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"a06ff1372683d0ca6f9fb8f4465bf8dffb031a0b1135a7781aae010c58616a04"} Nov 25 15:39:00 crc kubenswrapper[4731]: I1125 15:39:00.863686 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mngwl"] Nov 25 15:39:00 crc kubenswrapper[4731]: I1125 15:39:00.870245 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:00 crc kubenswrapper[4731]: I1125 15:39:00.880569 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mngwl"] Nov 25 15:39:00 crc kubenswrapper[4731]: I1125 15:39:00.984746 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-catalog-content\") pod \"redhat-operators-mngwl\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") " pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:00 crc kubenswrapper[4731]: I1125 15:39:00.984870 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-utilities\") pod \"redhat-operators-mngwl\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") " pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:00 crc kubenswrapper[4731]: I1125 15:39:00.984965 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh8dw\" (UniqueName: \"kubernetes.io/projected/c8e84641-1173-4f99-a3bd-6419a88d7882-kube-api-access-xh8dw\") pod \"redhat-operators-mngwl\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") " pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:01 crc kubenswrapper[4731]: I1125 15:39:01.086733 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh8dw\" (UniqueName: \"kubernetes.io/projected/c8e84641-1173-4f99-a3bd-6419a88d7882-kube-api-access-xh8dw\") pod \"redhat-operators-mngwl\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") " pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:01 crc kubenswrapper[4731]: I1125 15:39:01.087089 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-catalog-content\") pod \"redhat-operators-mngwl\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") " pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:01 crc kubenswrapper[4731]: I1125 15:39:01.087160 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-utilities\") pod \"redhat-operators-mngwl\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") " pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:01 crc kubenswrapper[4731]: I1125 15:39:01.087616 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-catalog-content\") pod \"redhat-operators-mngwl\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") " pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:01 crc kubenswrapper[4731]: I1125 15:39:01.087629 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-utilities\") pod \"redhat-operators-mngwl\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") " pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:01 crc kubenswrapper[4731]: I1125 15:39:01.126969 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xh8dw\" (UniqueName: \"kubernetes.io/projected/c8e84641-1173-4f99-a3bd-6419a88d7882-kube-api-access-xh8dw\") pod \"redhat-operators-mngwl\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") " pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:01 crc kubenswrapper[4731]: I1125 15:39:01.234300 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mngwl" Nov 25 15:39:01 crc kubenswrapper[4731]: I1125 15:39:01.808315 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mngwl"] Nov 25 15:39:02 crc kubenswrapper[4731]: I1125 15:39:02.725008 4731 generic.go:334] "Generic (PLEG): container finished" podID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerID="8d240f1c6f60f02b790076250962ffdea3ececcabeb4200ae2adbba5399ea786" exitCode=0 Nov 25 15:39:02 crc kubenswrapper[4731]: I1125 15:39:02.725326 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mngwl" event={"ID":"c8e84641-1173-4f99-a3bd-6419a88d7882","Type":"ContainerDied","Data":"8d240f1c6f60f02b790076250962ffdea3ececcabeb4200ae2adbba5399ea786"} Nov 25 15:39:02 crc kubenswrapper[4731]: I1125 15:39:02.725360 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mngwl" event={"ID":"c8e84641-1173-4f99-a3bd-6419a88d7882","Type":"ContainerStarted","Data":"34b1e7de62a51b12247baa56c15e80345e2f725e7fe79e1d1fece362f33caf48"} Nov 25 15:39:03 crc kubenswrapper[4731]: I1125 15:39:03.737135 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mngwl" event={"ID":"c8e84641-1173-4f99-a3bd-6419a88d7882","Type":"ContainerStarted","Data":"c6057f27bc659e2a31d56516a7d09bf5dbc038c08402ef7ef072354fbbee6fad"} Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.617884 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vfl8n"] Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.620515 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.630050 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfl8n"]
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.740867 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42qrb\" (UniqueName: \"kubernetes.io/projected/89759ef8-3b0b-4c78-85e0-c34963a08a21-kube-api-access-42qrb\") pod \"community-operators-vfl8n\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") " pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.741252 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-utilities\") pod \"community-operators-vfl8n\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") " pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.741462 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-catalog-content\") pod \"community-operators-vfl8n\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") " pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.844221 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42qrb\" (UniqueName: \"kubernetes.io/projected/89759ef8-3b0b-4c78-85e0-c34963a08a21-kube-api-access-42qrb\") pod \"community-operators-vfl8n\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") " pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.844319 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-utilities\") pod \"community-operators-vfl8n\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") " pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.844418 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-catalog-content\") pod \"community-operators-vfl8n\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") " pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.845024 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-utilities\") pod \"community-operators-vfl8n\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") " pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.845093 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-catalog-content\") pod \"community-operators-vfl8n\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") " pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.875882 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42qrb\" (UniqueName: \"kubernetes.io/projected/89759ef8-3b0b-4c78-85e0-c34963a08a21-kube-api-access-42qrb\") pod \"community-operators-vfl8n\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") " pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:07 crc kubenswrapper[4731]: I1125 15:39:07.955281 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:08 crc kubenswrapper[4731]: I1125 15:39:08.685772 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vfl8n"]
Nov 25 15:39:08 crc kubenswrapper[4731]: W1125 15:39:08.688491 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89759ef8_3b0b_4c78_85e0_c34963a08a21.slice/crio-04b5a066074b4971e01f88e9819164a5abfc0fd39a1d6b6e087aa34055a08d99 WatchSource:0}: Error finding container 04b5a066074b4971e01f88e9819164a5abfc0fd39a1d6b6e087aa34055a08d99: Status 404 returned error can't find the container with id 04b5a066074b4971e01f88e9819164a5abfc0fd39a1d6b6e087aa34055a08d99
Nov 25 15:39:08 crc kubenswrapper[4731]: I1125 15:39:08.780723 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfl8n" event={"ID":"89759ef8-3b0b-4c78-85e0-c34963a08a21","Type":"ContainerStarted","Data":"04b5a066074b4971e01f88e9819164a5abfc0fd39a1d6b6e087aa34055a08d99"}
Nov 25 15:39:09 crc kubenswrapper[4731]: I1125 15:39:09.804040 4731 generic.go:334] "Generic (PLEG): container finished" podID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerID="c6057f27bc659e2a31d56516a7d09bf5dbc038c08402ef7ef072354fbbee6fad" exitCode=0
Nov 25 15:39:09 crc kubenswrapper[4731]: I1125 15:39:09.804344 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mngwl" event={"ID":"c8e84641-1173-4f99-a3bd-6419a88d7882","Type":"ContainerDied","Data":"c6057f27bc659e2a31d56516a7d09bf5dbc038c08402ef7ef072354fbbee6fad"}
Nov 25 15:39:09 crc kubenswrapper[4731]: I1125 15:39:09.811342 4731 generic.go:334] "Generic (PLEG): container finished" podID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerID="7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258" exitCode=0
Nov 25 15:39:09 crc kubenswrapper[4731]: I1125 15:39:09.811545 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfl8n" event={"ID":"89759ef8-3b0b-4c78-85e0-c34963a08a21","Type":"ContainerDied","Data":"7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258"}
Nov 25 15:39:10 crc kubenswrapper[4731]: I1125 15:39:10.825898 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mngwl" event={"ID":"c8e84641-1173-4f99-a3bd-6419a88d7882","Type":"ContainerStarted","Data":"c77df9c5d5ef075abacb2b75e99a081d6eb51488bd24e44a0be2e5990325eaf8"}
Nov 25 15:39:10 crc kubenswrapper[4731]: I1125 15:39:10.828025 4731 generic.go:334] "Generic (PLEG): container finished" podID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerID="97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4" exitCode=0
Nov 25 15:39:10 crc kubenswrapper[4731]: I1125 15:39:10.828060 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfl8n" event={"ID":"89759ef8-3b0b-4c78-85e0-c34963a08a21","Type":"ContainerDied","Data":"97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4"}
Nov 25 15:39:10 crc kubenswrapper[4731]: I1125 15:39:10.847879 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mngwl" podStartSLOduration=3.366618732 podStartE2EDuration="10.847851682s" podCreationTimestamp="2025-11-25 15:39:00 +0000 UTC" firstStartedPulling="2025-11-25 15:39:02.727062939 +0000 UTC m=+2701.474197369" lastFinishedPulling="2025-11-25 15:39:10.208295899 +0000 UTC m=+2708.955430319" observedRunningTime="2025-11-25 15:39:10.844363195 +0000 UTC m=+2709.591497635" watchObservedRunningTime="2025-11-25 15:39:10.847851682 +0000 UTC m=+2709.594986122"
Nov 25 15:39:11 crc kubenswrapper[4731]: I1125 15:39:11.234480 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mngwl"
Nov 25 15:39:11 crc kubenswrapper[4731]: I1125 15:39:11.234870 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mngwl"
Nov 25 15:39:11 crc kubenswrapper[4731]: I1125 15:39:11.839986 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfl8n" event={"ID":"89759ef8-3b0b-4c78-85e0-c34963a08a21","Type":"ContainerStarted","Data":"ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58"}
Nov 25 15:39:12 crc kubenswrapper[4731]: I1125 15:39:12.472206 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mngwl" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerName="registry-server" probeResult="failure" output=<
Nov 25 15:39:12 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s
Nov 25 15:39:12 crc kubenswrapper[4731]: >
Nov 25 15:39:17 crc kubenswrapper[4731]: I1125 15:39:17.956044 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:17 crc kubenswrapper[4731]: I1125 15:39:17.956607 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:18 crc kubenswrapper[4731]: I1125 15:39:18.004288 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:18 crc kubenswrapper[4731]: I1125 15:39:18.026162 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vfl8n" podStartSLOduration=9.388965679 podStartE2EDuration="11.02613945s" podCreationTimestamp="2025-11-25 15:39:07 +0000 UTC" firstStartedPulling="2025-11-25 15:39:09.81560773 +0000 UTC m=+2708.562742170" lastFinishedPulling="2025-11-25 15:39:11.452781511 +0000 UTC m=+2710.199915941" observedRunningTime="2025-11-25 15:39:11.864525125 +0000 UTC m=+2710.611659555" watchObservedRunningTime="2025-11-25 15:39:18.02613945 +0000 UTC m=+2716.773273880"
Nov 25 15:39:18 crc kubenswrapper[4731]: I1125 15:39:18.943972 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:19 crc kubenswrapper[4731]: I1125 15:39:19.008634 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vfl8n"]
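
The two "Observed pod startup duration" entries above are internally consistent: podStartSLOduration works out to podStartE2EDuration minus the image-pull window, i.e. the SLO figure excludes time spent pulling images. Checked against the community-operators-vfl8n entry using the monotonic m= offsets (this is an inference from the numbers in this log, not a claim about kubelet internals):

    pull window         = m=+2710.199915941 - m=+2708.562742170 = 1.637173771s
    podStartE2EDuration = 11.02613945s (watchObservedRunningTime - podCreationTimestamp)
    podStartSLOduration = 11.02613945s - 1.637173771s = 9.388965679s, matching the logged value

The redhat-operators-mngwl entry satisfies the same identity: 10.847851682s - 7.481232950s = 3.366618732s.
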
Nov 25 15:39:20 crc kubenswrapper[4731]: I1125 15:39:20.928945 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vfl8n" podUID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerName="registry-server" containerID="cri-o://ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58" gracePeriod=2
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.448075 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.517621 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-catalog-content\") pod \"89759ef8-3b0b-4c78-85e0-c34963a08a21\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") "
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.517976 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-utilities\") pod \"89759ef8-3b0b-4c78-85e0-c34963a08a21\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") "
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.518136 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42qrb\" (UniqueName: \"kubernetes.io/projected/89759ef8-3b0b-4c78-85e0-c34963a08a21-kube-api-access-42qrb\") pod \"89759ef8-3b0b-4c78-85e0-c34963a08a21\" (UID: \"89759ef8-3b0b-4c78-85e0-c34963a08a21\") "
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.524292 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-utilities" (OuterVolumeSpecName: "utilities") pod "89759ef8-3b0b-4c78-85e0-c34963a08a21" (UID: "89759ef8-3b0b-4c78-85e0-c34963a08a21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.528047 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89759ef8-3b0b-4c78-85e0-c34963a08a21-kube-api-access-42qrb" (OuterVolumeSpecName: "kube-api-access-42qrb") pod "89759ef8-3b0b-4c78-85e0-c34963a08a21" (UID: "89759ef8-3b0b-4c78-85e0-c34963a08a21"). InnerVolumeSpecName "kube-api-access-42qrb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.623468 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.623550 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42qrb\" (UniqueName: \"kubernetes.io/projected/89759ef8-3b0b-4c78-85e0-c34963a08a21-kube-api-access-42qrb\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.638316 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89759ef8-3b0b-4c78-85e0-c34963a08a21" (UID: "89759ef8-3b0b-4c78-85e0-c34963a08a21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.725214 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89759ef8-3b0b-4c78-85e0-c34963a08a21-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.940110 4731 generic.go:334] "Generic (PLEG): container finished" podID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerID="ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58" exitCode=0
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.940162 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfl8n" event={"ID":"89759ef8-3b0b-4c78-85e0-c34963a08a21","Type":"ContainerDied","Data":"ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58"}
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.940191 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vfl8n" event={"ID":"89759ef8-3b0b-4c78-85e0-c34963a08a21","Type":"ContainerDied","Data":"04b5a066074b4971e01f88e9819164a5abfc0fd39a1d6b6e087aa34055a08d99"}
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.940211 4731 scope.go:117] "RemoveContainer" containerID="ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58"
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.940385 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vfl8n"
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.975500 4731 scope.go:117] "RemoveContainer" containerID="97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4"
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.988135 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vfl8n"]
Nov 25 15:39:21 crc kubenswrapper[4731]: I1125 15:39:21.999792 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vfl8n"]
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.048797 4731 scope.go:117] "RemoveContainer" containerID="7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258"
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.096793 4731 scope.go:117] "RemoveContainer" containerID="ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58"
Nov 25 15:39:22 crc kubenswrapper[4731]: E1125 15:39:22.097326 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58\": container with ID starting with ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58 not found: ID does not exist" containerID="ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58"
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.097373 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58"} err="failed to get container status \"ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58\": rpc error: code = NotFound desc = could not find container \"ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58\": container with ID starting with ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58 not found: ID does not exist"
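
The "RemoveContainer" / NotFound pair above (repeated below for the pod's other two containers) is routine cleanup rather than a real failure: by the time the kubelet asks the runtime for the container's status, CRI-O has already removed it, and NotFound simply means there is nothing left to delete. A minimal Go sketch of that idempotent lookup against the CRI API, assuming CRI-O's default socket path; the container ID is copied from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/status"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
        defer cancel()

        // CRI-O's default runtime endpoint (an assumption for this sketch).
        conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        resp, err := client.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{
            ContainerId: "ac0a243945a1909e2acf23f1d186ad161c21ebaab8793b43ea49fdbf9e6d1e58",
        })
        switch {
        case err == nil:
            fmt.Println("container state:", resp.GetStatus().GetState())
        case status.Code(err) == codes.NotFound:
            // The condition logged above: the runtime already removed the
            // container, so deletion is effectively complete.
            fmt.Println("container already gone; nothing left to delete")
        default:
            fmt.Println("status lookup failed:", err)
        }
    }
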
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.097403 4731 scope.go:117] "RemoveContainer" containerID="97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4"
Nov 25 15:39:22 crc kubenswrapper[4731]: E1125 15:39:22.097875 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4\": container with ID starting with 97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4 not found: ID does not exist" containerID="97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4"
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.097922 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4"} err="failed to get container status \"97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4\": rpc error: code = NotFound desc = could not find container \"97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4\": container with ID starting with 97181f5dab523f5e355c56a10511ffe54f8ae899919d9bd00142816940524fc4 not found: ID does not exist"
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.097950 4731 scope.go:117] "RemoveContainer" containerID="7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258"
Nov 25 15:39:22 crc kubenswrapper[4731]: E1125 15:39:22.098302 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258\": container with ID starting with 7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258 not found: ID does not exist" containerID="7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258"
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.098324 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258"} err="failed to get container status \"7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258\": rpc error: code = NotFound desc = could not find container \"7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258\": container with ID starting with 7a1ad9c828de4c68d2b811de264fa48734cb4166b8db284d5d5660f67937a258 not found: ID does not exist"
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.292917 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mngwl" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerName="registry-server" probeResult="failure" output=<
Nov 25 15:39:22 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s
Nov 25 15:39:22 crc kubenswrapper[4731]: >
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.834221 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89759ef8-3b0b-4c78-85e0-c34963a08a21" path="/var/lib/kubelet/pods/89759ef8-3b0b-4c78-85e0-c34963a08a21/volumes"
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.953487 4731 generic.go:334] "Generic (PLEG): container finished" podID="6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" containerID="3f41381815a08fe681c53f08fe8e17343a15a9fbb33823f81fd70e3561a8a1a6" exitCode=0
Nov 25 15:39:22 crc kubenswrapper[4731]: I1125 15:39:22.953554 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" event={"ID":"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35","Type":"ContainerDied","Data":"3f41381815a08fe681c53f08fe8e17343a15a9fbb33823f81fd70e3561a8a1a6"}
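
The startup-probe failures for redhat-operators-mngwl (here and at 15:39:12 above) report `timeout: failed to connect service ":50051" within 1s`, the output of a gRPC health check against the registry-server's catalog port. A minimal Go sketch of the equivalent check, assuming a plaintext gRPC endpoint on localhost:50051 and the standard gRPC health service; the one-second budget mirrors the probe output:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
        // Same 1s budget as the failing probe above.
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.DialContext(ctx, "localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            fmt.Println("probe failure:", err)
            return
        }
        defer conn.Close()

        // Query the standard gRPC health service; an empty Service name asks
        // about the server as a whole.
        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            // A registry-server still loading its catalog times out here,
            // which is what the probe output above shows.
            fmt.Println("probe failure:", err)
            return
        }
        fmt.Println("probe status:", resp.GetStatus()) // SERVING once ready
    }

The pod recovers on its own: the startup probe flips to status="started" and readiness reports "ready" in the 15:39:31 entries below, so these failures just mean the catalog was still warming up.
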
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.485551 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8"
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.579113 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-ssh-key\") pod \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") "
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.579214 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-1\") pod \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") "
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.579285 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-extra-config-0\") pod \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") "
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.579324 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-0\") pod \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") "
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.579351 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-combined-ca-bundle\") pod \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") "
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.579386 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-1\") pod \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") "
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.579410 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zkj6\" (UniqueName: \"kubernetes.io/projected/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-kube-api-access-4zkj6\") pod \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") "
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.579607 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-inventory\") pod \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") "
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.579676 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-0\") pod \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\" (UID: \"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35\") "
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.603287 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-kube-api-access-4zkj6" (OuterVolumeSpecName: "kube-api-access-4zkj6") pod "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" (UID: "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35"). InnerVolumeSpecName "kube-api-access-4zkj6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.603968 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" (UID: "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.608344 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" (UID: "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.611362 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" (UID: "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.612867 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-inventory" (OuterVolumeSpecName: "inventory") pod "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" (UID: "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.613154 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" (UID: "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.620851 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" (UID: "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.623079 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" (UID: "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.635242 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" (UID: "6ca6a8fb-1ee3-40d3-b8c7-733438b20c35"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.681601 4731 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.681649 4731 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.681663 4731 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.681675 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zkj6\" (UniqueName: \"kubernetes.io/projected/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-kube-api-access-4zkj6\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.681690 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-inventory\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.681702 4731 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.681714 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-ssh-key\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.681726 4731 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.681741 4731 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/6ca6a8fb-1ee3-40d3-b8c7-733438b20c35-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
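
The teardown above follows the volume manager's reconcile pattern that recurs throughout this log: for each volume no longer desired, "operationExecutor.UnmountVolume started" (reconciler_common.go:159) is followed by "UnmountVolume.TearDown succeeded" (operation_generator.go:803), and only then is the volume reported as "Volume detached" (reconciler_common.go:293). A Go schematic of that desired-state/actual-state loop, purely as an illustration of the pattern, not kubelet's actual code:

    package main

    import "fmt"

    // actualState mimics the kubelet's "actual state of world": the set of
    // volumes currently mounted for a pod.
    type actualState struct{ mounted map[string]bool }

    // reconcile unmounts every volume that is mounted but no longer desired,
    // reporting it detached only after teardown succeeds.
    func (a *actualState) reconcile(desired map[string]bool) {
        for vol := range a.mounted {
            if desired[vol] {
                continue
            }
            fmt.Printf("UnmountVolume started for volume %q\n", vol)
            fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
            delete(a.mounted, vol)
            fmt.Printf("Volume detached for volume %q\n", vol)
        }
    }

    func main() {
        a := &actualState{mounted: map[string]bool{
            "ssh-key": true, "inventory": true, "nova-combined-ca-bundle": true,
        }}
        // The pod was deleted, so the desired set is empty, as for the
        // nova-edpm-deployment pod above.
        a.reconcile(map[string]bool{})
    }
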
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.975098 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8" event={"ID":"6ca6a8fb-1ee3-40d3-b8c7-733438b20c35","Type":"ContainerDied","Data":"41ec32f8eae68b198232684e9b765548d03bac41e74407c546bf5f1216a3d8d5"}
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.975370 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41ec32f8eae68b198232684e9b765548d03bac41e74407c546bf5f1216a3d8d5"
Nov 25 15:39:24 crc kubenswrapper[4731]: I1125 15:39:24.975331 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-9jzw8"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.097377 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"]
Nov 25 15:39:25 crc kubenswrapper[4731]: E1125 15:39:25.097846 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerName="extract-content"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.097869 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerName="extract-content"
Nov 25 15:39:25 crc kubenswrapper[4731]: E1125 15:39:25.097896 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerName="extract-utilities"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.097905 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerName="extract-utilities"
Nov 25 15:39:25 crc kubenswrapper[4731]: E1125 15:39:25.097919 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.097927 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:39:25 crc kubenswrapper[4731]: E1125 15:39:25.097949 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerName="registry-server"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.097957 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerName="registry-server"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.098172 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ca6a8fb-1ee3-40d3-b8c7-733438b20c35" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.098184 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="89759ef8-3b0b-4c78-85e0-c34963a08a21" containerName="registry-server"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.098903 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.101962 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.102078 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.102152 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.102507 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-n86dg"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.102876 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.116492 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"]
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.193464 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.193707 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.193867 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.193962 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.194036 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.194084 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9msgg\" (UniqueName: \"kubernetes.io/projected/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-kube-api-access-9msgg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.194166 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.295306 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.295361 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9msgg\" (UniqueName: \"kubernetes.io/projected/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-kube-api-access-9msgg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.295439 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.295502 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.295558 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.295607 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.296186 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.299913 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.299913 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.300168 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.300335 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.302939 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.324697 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.328720 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9msgg\" (UniqueName: \"kubernetes.io/projected/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-kube-api-access-9msgg\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-722hf\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:25 crc kubenswrapper[4731]: I1125 15:39:25.417545 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"
Nov 25 15:39:26 crc kubenswrapper[4731]: W1125 15:39:26.041996 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbd03b7_1cca_4719_ae8b_e532f0d61c9e.slice/crio-d9394e237869cd1540be045ae91083e4ad70a33b2c5f7bdab6a5f75864bc4e52 WatchSource:0}: Error finding container d9394e237869cd1540be045ae91083e4ad70a33b2c5f7bdab6a5f75864bc4e52: Status 404 returned error can't find the container with id d9394e237869cd1540be045ae91083e4ad70a33b2c5f7bdab6a5f75864bc4e52
Nov 25 15:39:26 crc kubenswrapper[4731]: I1125 15:39:26.042987 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf"]
Nov 25 15:39:26 crc kubenswrapper[4731]: I1125 15:39:26.993975 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf" event={"ID":"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e","Type":"ContainerStarted","Data":"f09054ccaa8acbc75681169a4b35d6598002698292483dd7f9572974a2d52359"}
Nov 25 15:39:26 crc kubenswrapper[4731]: I1125 15:39:26.994321 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf" event={"ID":"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e","Type":"ContainerStarted","Data":"d9394e237869cd1540be045ae91083e4ad70a33b2c5f7bdab6a5f75864bc4e52"}
Nov 25 15:39:27 crc kubenswrapper[4731]: I1125 15:39:27.014420 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf" podStartSLOduration=1.617416071 podStartE2EDuration="2.014381667s" podCreationTimestamp="2025-11-25 15:39:25 +0000 UTC" firstStartedPulling="2025-11-25 15:39:26.044880809 +0000 UTC m=+2724.792015239" lastFinishedPulling="2025-11-25 15:39:26.441846405 +0000 UTC m=+2725.188980835" observedRunningTime="2025-11-25 15:39:27.010841019 +0000 UTC m=+2725.757975479" watchObservedRunningTime="2025-11-25 15:39:27.014381667 +0000 UTC m=+2725.761516097"
Nov 25 15:39:31 crc kubenswrapper[4731]: I1125 15:39:31.297052 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mngwl"
Nov 25 15:39:31 crc kubenswrapper[4731]: I1125 15:39:31.355557 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mngwl"
Nov 25 15:39:32 crc kubenswrapper[4731]: I1125 15:39:32.067072 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mngwl"]
Nov 25 15:39:33 crc kubenswrapper[4731]: I1125 15:39:33.046406 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mngwl" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerName="registry-server" containerID="cri-o://c77df9c5d5ef075abacb2b75e99a081d6eb51488bd24e44a0be2e5990325eaf8" gracePeriod=2
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.058640 4731 generic.go:334] "Generic (PLEG): container finished" podID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerID="c77df9c5d5ef075abacb2b75e99a081d6eb51488bd24e44a0be2e5990325eaf8" exitCode=0
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.059069 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mngwl" event={"ID":"c8e84641-1173-4f99-a3bd-6419a88d7882","Type":"ContainerDied","Data":"c77df9c5d5ef075abacb2b75e99a081d6eb51488bd24e44a0be2e5990325eaf8"}
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.178097 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mngwl"
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.296429 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh8dw\" (UniqueName: \"kubernetes.io/projected/c8e84641-1173-4f99-a3bd-6419a88d7882-kube-api-access-xh8dw\") pod \"c8e84641-1173-4f99-a3bd-6419a88d7882\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") "
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.296852 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-catalog-content\") pod \"c8e84641-1173-4f99-a3bd-6419a88d7882\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") "
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.296921 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-utilities\") pod \"c8e84641-1173-4f99-a3bd-6419a88d7882\" (UID: \"c8e84641-1173-4f99-a3bd-6419a88d7882\") "
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.299136 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-utilities" (OuterVolumeSpecName: "utilities") pod "c8e84641-1173-4f99-a3bd-6419a88d7882" (UID: "c8e84641-1173-4f99-a3bd-6419a88d7882"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.307302 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8e84641-1173-4f99-a3bd-6419a88d7882-kube-api-access-xh8dw" (OuterVolumeSpecName: "kube-api-access-xh8dw") pod "c8e84641-1173-4f99-a3bd-6419a88d7882" (UID: "c8e84641-1173-4f99-a3bd-6419a88d7882"). InnerVolumeSpecName "kube-api-access-xh8dw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.400274 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.400307 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh8dw\" (UniqueName: \"kubernetes.io/projected/c8e84641-1173-4f99-a3bd-6419a88d7882-kube-api-access-xh8dw\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.442871 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8e84641-1173-4f99-a3bd-6419a88d7882" (UID: "c8e84641-1173-4f99-a3bd-6419a88d7882"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:39:34 crc kubenswrapper[4731]: I1125 15:39:34.502581 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8e84641-1173-4f99-a3bd-6419a88d7882-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 15:39:35 crc kubenswrapper[4731]: I1125 15:39:35.069594 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mngwl" event={"ID":"c8e84641-1173-4f99-a3bd-6419a88d7882","Type":"ContainerDied","Data":"34b1e7de62a51b12247baa56c15e80345e2f725e7fe79e1d1fece362f33caf48"}
Nov 25 15:39:35 crc kubenswrapper[4731]: I1125 15:39:35.069648 4731 scope.go:117] "RemoveContainer" containerID="c77df9c5d5ef075abacb2b75e99a081d6eb51488bd24e44a0be2e5990325eaf8"
Nov 25 15:39:35 crc kubenswrapper[4731]: I1125 15:39:35.069670 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mngwl"
Nov 25 15:39:35 crc kubenswrapper[4731]: I1125 15:39:35.095342 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mngwl"]
Nov 25 15:39:35 crc kubenswrapper[4731]: I1125 15:39:35.096068 4731 scope.go:117] "RemoveContainer" containerID="c6057f27bc659e2a31d56516a7d09bf5dbc038c08402ef7ef072354fbbee6fad"
Nov 25 15:39:35 crc kubenswrapper[4731]: I1125 15:39:35.104741 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mngwl"]
Nov 25 15:39:35 crc kubenswrapper[4731]: I1125 15:39:35.116464 4731 scope.go:117] "RemoveContainer" containerID="8d240f1c6f60f02b790076250962ffdea3ececcabeb4200ae2adbba5399ea786"
Nov 25 15:39:36 crc kubenswrapper[4731]: I1125 15:39:36.825134 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" path="/var/lib/kubelet/pods/c8e84641-1173-4f99-a3bd-6419a88d7882/volumes"
Nov 25 15:39:50 crc kubenswrapper[4731]: I1125 15:39:50.040385 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:39:50 crc kubenswrapper[4731]: I1125 15:39:50.040919 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:40:20 crc kubenswrapper[4731]: I1125 15:40:20.040101 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:40:20 crc kubenswrapper[4731]: I1125 15:40:20.040552 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
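
The machine-config-daemon liveness probe has now failed twice in a row (15:39:50 and 15:40:20, a 30-second period), each time with "connection refused" on http://127.0.0.1:8798/health. Consistent with the default failureThreshold of 3, the third failure below is the one that flips the probe to status="unhealthy" and triggers a restart with gracePeriod=600. The probe itself is a plain HTTP GET; a minimal Go sketch of the equivalent check (the URL comes from the probe output, the rest is illustrative):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: time.Second}
        resp, err := client.Get("http://127.0.0.1:8798/health")
        if err != nil {
            // With nothing listening on 8798 this yields the same
            // "connect: connection refused" seen in the log.
            fmt.Println("probe failure:", err)
            return
        }
        defer resp.Body.Close()
        // Kubelet HTTP probes treat any 2xx/3xx status as success.
        if resp.StatusCode >= 200 && resp.StatusCode < 400 {
            fmt.Println("probe success:", resp.Status)
        } else {
            fmt.Println("probe failure:", resp.Status)
        }
    }
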
Nov 25 15:40:50 crc kubenswrapper[4731]: I1125 15:40:50.039705 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:40:50 crc kubenswrapper[4731]: I1125 15:40:50.040315 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:40:50 crc kubenswrapper[4731]: I1125 15:40:50.040367 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz"
Nov 25 15:40:50 crc kubenswrapper[4731]: I1125 15:40:50.041119 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a06ff1372683d0ca6f9fb8f4465bf8dffb031a0b1135a7781aae010c58616a04"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 15:40:50 crc kubenswrapper[4731]: I1125 15:40:50.041186 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://a06ff1372683d0ca6f9fb8f4465bf8dffb031a0b1135a7781aae010c58616a04" gracePeriod=600
Nov 25 15:40:50 crc kubenswrapper[4731]: I1125 15:40:50.766779 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="a06ff1372683d0ca6f9fb8f4465bf8dffb031a0b1135a7781aae010c58616a04" exitCode=0
Nov 25 15:40:50 crc kubenswrapper[4731]: I1125 15:40:50.766850 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"a06ff1372683d0ca6f9fb8f4465bf8dffb031a0b1135a7781aae010c58616a04"}
Nov 25 15:40:50 crc kubenswrapper[4731]: I1125 15:40:50.767145 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249"}
Nov 25 15:40:50 crc kubenswrapper[4731]: I1125 15:40:50.767167 4731 scope.go:117] "RemoveContainer" containerID="2c542e83ff0053d2acfc960a432792c04b514f3e32fe3d1ce616e58eb8c08bfc"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.713405 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sgd29"]
Nov 25 15:41:27 crc kubenswrapper[4731]: E1125 15:41:27.714422 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerName="extract-utilities"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.714442 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerName="extract-utilities"
Nov 25 15:41:27 crc kubenswrapper[4731]: E1125 15:41:27.714478 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerName="extract-content"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.714488 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerName="extract-content"
Nov 25 15:41:27 crc kubenswrapper[4731]: E1125 15:41:27.714502 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerName="registry-server"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.714509 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerName="registry-server"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.714738 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8e84641-1173-4f99-a3bd-6419a88d7882" containerName="registry-server"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.716736 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.728893 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgd29"]
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.839457 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47wdb\" (UniqueName: \"kubernetes.io/projected/e09a8be0-e412-4be6-91f9-7c5b05f11444-kube-api-access-47wdb\") pod \"certified-operators-sgd29\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") " pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.839511 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-utilities\") pod \"certified-operators-sgd29\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") " pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.839551 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-catalog-content\") pod \"certified-operators-sgd29\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") " pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.942028 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47wdb\" (UniqueName: \"kubernetes.io/projected/e09a8be0-e412-4be6-91f9-7c5b05f11444-kube-api-access-47wdb\") pod \"certified-operators-sgd29\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") " pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.942102 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-utilities\") pod \"certified-operators-sgd29\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") " pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.942147 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-catalog-content\") pod \"certified-operators-sgd29\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") " pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.942880 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-catalog-content\") pod \"certified-operators-sgd29\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") " pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.942916 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-utilities\") pod \"certified-operators-sgd29\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") " pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:27 crc kubenswrapper[4731]: I1125 15:41:27.962774 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47wdb\" (UniqueName: \"kubernetes.io/projected/e09a8be0-e412-4be6-91f9-7c5b05f11444-kube-api-access-47wdb\") pod \"certified-operators-sgd29\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") " pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:28 crc kubenswrapper[4731]: I1125 15:41:28.041282 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:28 crc kubenswrapper[4731]: I1125 15:41:28.623717 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sgd29"]
Nov 25 15:41:29 crc kubenswrapper[4731]: I1125 15:41:29.125704 4731 generic.go:334] "Generic (PLEG): container finished" podID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerID="61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8" exitCode=0
Nov 25 15:41:29 crc kubenswrapper[4731]: I1125 15:41:29.125768 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgd29" event={"ID":"e09a8be0-e412-4be6-91f9-7c5b05f11444","Type":"ContainerDied","Data":"61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8"}
Nov 25 15:41:29 crc kubenswrapper[4731]: I1125 15:41:29.126019 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgd29" event={"ID":"e09a8be0-e412-4be6-91f9-7c5b05f11444","Type":"ContainerStarted","Data":"9807e5ccb7408d37c9da78fcf91e3d7b094a8971137145b72839802e803bad97"}
Nov 25 15:41:29 crc kubenswrapper[4731]: I1125 15:41:29.130090 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 15:41:30 crc kubenswrapper[4731]: I1125 15:41:30.136774 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgd29" event={"ID":"e09a8be0-e412-4be6-91f9-7c5b05f11444","Type":"ContainerStarted","Data":"f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0"}
Nov 25 15:41:31 crc kubenswrapper[4731]: I1125 15:41:31.145854 4731 generic.go:334] "Generic (PLEG): container finished" podID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerID="f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0" exitCode=0
Nov 25 15:41:31 crc kubenswrapper[4731]: I1125 15:41:31.145899 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgd29" event={"ID":"e09a8be0-e412-4be6-91f9-7c5b05f11444","Type":"ContainerDied","Data":"f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0"}
Nov 25 15:41:32 crc kubenswrapper[4731]: I1125 15:41:32.157062 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgd29" event={"ID":"e09a8be0-e412-4be6-91f9-7c5b05f11444","Type":"ContainerStarted","Data":"67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2"}
Nov 25 15:41:32 crc kubenswrapper[4731]: I1125 15:41:32.185662 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sgd29" podStartSLOduration=2.715408484 podStartE2EDuration="5.185641481s" podCreationTimestamp="2025-11-25 15:41:27 +0000 UTC" firstStartedPulling="2025-11-25 15:41:29.129833057 +0000 UTC m=+2847.876967487" lastFinishedPulling="2025-11-25 15:41:31.600066054 +0000 UTC m=+2850.347200484" observedRunningTime="2025-11-25 15:41:32.175001466 +0000 UTC m=+2850.922135906" watchObservedRunningTime="2025-11-25 15:41:32.185641481 +0000 UTC m=+2850.932775911"
Nov 25 15:41:38 crc kubenswrapper[4731]: I1125 15:41:38.041778 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:38 crc kubenswrapper[4731]: I1125 15:41:38.043373 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:38 crc kubenswrapper[4731]: I1125 15:41:38.102491 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:38 crc kubenswrapper[4731]: I1125 15:41:38.267122 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:38 crc kubenswrapper[4731]: I1125 15:41:38.346070 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sgd29"]
Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.228298 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sgd29" podUID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerName="registry-server" containerID="cri-o://67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2" gracePeriod=2
Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.688172 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sgd29"
Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.828254 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47wdb\" (UniqueName: \"kubernetes.io/projected/e09a8be0-e412-4be6-91f9-7c5b05f11444-kube-api-access-47wdb\") pod \"e09a8be0-e412-4be6-91f9-7c5b05f11444\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") "
Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.828317 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-utilities\") pod \"e09a8be0-e412-4be6-91f9-7c5b05f11444\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") "
Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.828398 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-catalog-content\") pod \"e09a8be0-e412-4be6-91f9-7c5b05f11444\" (UID: \"e09a8be0-e412-4be6-91f9-7c5b05f11444\") "
Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.829371 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-utilities" (OuterVolumeSpecName: "utilities") pod "e09a8be0-e412-4be6-91f9-7c5b05f11444" (UID: "e09a8be0-e412-4be6-91f9-7c5b05f11444"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.835486 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e09a8be0-e412-4be6-91f9-7c5b05f11444-kube-api-access-47wdb" (OuterVolumeSpecName: "kube-api-access-47wdb") pod "e09a8be0-e412-4be6-91f9-7c5b05f11444" (UID: "e09a8be0-e412-4be6-91f9-7c5b05f11444"). InnerVolumeSpecName "kube-api-access-47wdb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.879573 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e09a8be0-e412-4be6-91f9-7c5b05f11444" (UID: "e09a8be0-e412-4be6-91f9-7c5b05f11444"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.930887 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47wdb\" (UniqueName: \"kubernetes.io/projected/e09a8be0-e412-4be6-91f9-7c5b05f11444-kube-api-access-47wdb\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.930939 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:40 crc kubenswrapper[4731]: I1125 15:41:40.930950 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e09a8be0-e412-4be6-91f9-7c5b05f11444-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.239845 4731 generic.go:334] "Generic (PLEG): container finished" podID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerID="67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2" exitCode=0 Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.239920 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgd29" event={"ID":"e09a8be0-e412-4be6-91f9-7c5b05f11444","Type":"ContainerDied","Data":"67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2"} Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.239953 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sgd29" event={"ID":"e09a8be0-e412-4be6-91f9-7c5b05f11444","Type":"ContainerDied","Data":"9807e5ccb7408d37c9da78fcf91e3d7b094a8971137145b72839802e803bad97"} Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.239957 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sgd29" Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.239974 4731 scope.go:117] "RemoveContainer" containerID="67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2" Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.272344 4731 scope.go:117] "RemoveContainer" containerID="f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0" Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.278613 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sgd29"] Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.288808 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sgd29"] Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.295439 4731 scope.go:117] "RemoveContainer" containerID="61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8" Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.339398 4731 scope.go:117] "RemoveContainer" containerID="67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2" Nov 25 15:41:41 crc kubenswrapper[4731]: E1125 15:41:41.339883 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2\": container with ID starting with 67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2 not found: ID does not exist" containerID="67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2" Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.339926 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2"} err="failed to get container status \"67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2\": rpc error: code = NotFound desc = could not find container \"67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2\": container with ID starting with 67537ee368b997237b2cf95c4421777de8eda4208afaedbaf85b51f19fac29b2 not found: ID does not exist" Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.339951 4731 scope.go:117] "RemoveContainer" containerID="f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0" Nov 25 15:41:41 crc kubenswrapper[4731]: E1125 15:41:41.340333 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0\": container with ID starting with f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0 not found: ID does not exist" containerID="f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0" Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.340358 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0"} err="failed to get container status \"f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0\": rpc error: code = NotFound desc = could not find container \"f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0\": container with ID starting with f629b0da7e8cf7555b7b6969243cec9e589d53e6cd2a548bf10ba51dfbeac9d0 not found: ID does not exist" Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.340375 4731 scope.go:117] "RemoveContainer" 
containerID="61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8" Nov 25 15:41:41 crc kubenswrapper[4731]: E1125 15:41:41.340789 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8\": container with ID starting with 61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8 not found: ID does not exist" containerID="61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8" Nov 25 15:41:41 crc kubenswrapper[4731]: I1125 15:41:41.340840 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8"} err="failed to get container status \"61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8\": rpc error: code = NotFound desc = could not find container \"61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8\": container with ID starting with 61f5b1b932a07dc00f96abc9ba7c4b0742af9552c0fe06a98c9277ce52ffd4d8 not found: ID does not exist" Nov 25 15:41:42 crc kubenswrapper[4731]: I1125 15:41:42.829486 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e09a8be0-e412-4be6-91f9-7c5b05f11444" path="/var/lib/kubelet/pods/e09a8be0-e412-4be6-91f9-7c5b05f11444/volumes" Nov 25 15:41:51 crc kubenswrapper[4731]: I1125 15:41:51.340084 4731 generic.go:334] "Generic (PLEG): container finished" podID="4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" containerID="f09054ccaa8acbc75681169a4b35d6598002698292483dd7f9572974a2d52359" exitCode=0 Nov 25 15:41:51 crc kubenswrapper[4731]: I1125 15:41:51.340168 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf" event={"ID":"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e","Type":"ContainerDied","Data":"f09054ccaa8acbc75681169a4b35d6598002698292483dd7f9572974a2d52359"} Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.766410 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.861351 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-telemetry-combined-ca-bundle\") pod \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.861447 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-1\") pod \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.861481 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-2\") pod \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.861505 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-inventory\") pod \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.861553 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9msgg\" (UniqueName: \"kubernetes.io/projected/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-kube-api-access-9msgg\") pod \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.861576 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-0\") pod \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.861703 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ssh-key\") pod \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\" (UID: \"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e\") " Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.867260 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" (UID: "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.867417 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-kube-api-access-9msgg" (OuterVolumeSpecName: "kube-api-access-9msgg") pod "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" (UID: "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e"). 
InnerVolumeSpecName "kube-api-access-9msgg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.891740 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" (UID: "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.893002 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" (UID: "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.893391 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-inventory" (OuterVolumeSpecName: "inventory") pod "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" (UID: "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.895959 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" (UID: "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.900178 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" (UID: "4fbd03b7-1cca-4719-ae8b-e532f0d61c9e"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.964181 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.964225 4731 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-inventory\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.964239 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9msgg\" (UniqueName: \"kubernetes.io/projected/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-kube-api-access-9msgg\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.964253 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.964266 4731 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ssh-key\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.964281 4731 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:52 crc kubenswrapper[4731]: I1125 15:41:52.964294 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/4fbd03b7-1cca-4719-ae8b-e532f0d61c9e-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Nov 25 15:41:53 crc kubenswrapper[4731]: I1125 15:41:53.364671 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf" event={"ID":"4fbd03b7-1cca-4719-ae8b-e532f0d61c9e","Type":"ContainerDied","Data":"d9394e237869cd1540be045ae91083e4ad70a33b2c5f7bdab6a5f75864bc4e52"} Nov 25 15:41:53 crc kubenswrapper[4731]: I1125 15:41:53.364722 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9394e237869cd1540be045ae91083e4ad70a33b2c5f7bdab6a5f75864bc4e52" Nov 25 15:41:53 crc kubenswrapper[4731]: I1125 15:41:53.365034 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-722hf" Nov 25 15:42:50 crc kubenswrapper[4731]: I1125 15:42:50.039712 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:42:50 crc kubenswrapper[4731]: I1125 15:42:50.040366 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:43:20 crc kubenswrapper[4731]: I1125 15:43:20.039785 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:43:20 crc kubenswrapper[4731]: I1125 15:43:20.040386 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:43:50 crc kubenswrapper[4731]: I1125 15:43:50.039864 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:43:50 crc kubenswrapper[4731]: I1125 15:43:50.040450 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:43:50 crc kubenswrapper[4731]: I1125 15:43:50.040514 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 15:43:50 crc kubenswrapper[4731]: I1125 15:43:50.041476 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:43:50 crc kubenswrapper[4731]: I1125 15:43:50.041556 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" gracePeriod=600 Nov 25 15:43:50 crc kubenswrapper[4731]: E1125 15:43:50.178105 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:43:50 crc kubenswrapper[4731]: I1125 15:43:50.436996 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" exitCode=0 Nov 25 15:43:50 crc kubenswrapper[4731]: I1125 15:43:50.437041 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249"} Nov 25 15:43:50 crc kubenswrapper[4731]: I1125 15:43:50.437377 4731 scope.go:117] "RemoveContainer" containerID="a06ff1372683d0ca6f9fb8f4465bf8dffb031a0b1135a7781aae010c58616a04" Nov 25 15:43:50 crc kubenswrapper[4731]: I1125 15:43:50.438748 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:43:50 crc kubenswrapper[4731]: E1125 15:43:50.439101 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:44:01 crc kubenswrapper[4731]: I1125 15:44:01.812923 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:44:01 crc kubenswrapper[4731]: E1125 15:44:01.814731 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:44:15 crc kubenswrapper[4731]: I1125 15:44:15.811775 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:44:15 crc kubenswrapper[4731]: E1125 15:44:15.812632 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:44:28 crc kubenswrapper[4731]: I1125 15:44:28.811901 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:44:28 crc kubenswrapper[4731]: E1125 15:44:28.812595 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:44:41 crc kubenswrapper[4731]: I1125 15:44:41.932314 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_089e8703-f71e-4f81-91af-19224ba7b1b4/manager/0.log" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.668876 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.669389 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="8815eebf-c8f8-49f9-94fb-5c3c454514d5" containerName="openstackclient" containerID="cri-o://73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0" gracePeriod=2 Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.685705 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.742554 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 15:44:43 crc kubenswrapper[4731]: E1125 15:44:43.742980 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerName="extract-utilities" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.742997 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerName="extract-utilities" Nov 25 15:44:43 crc kubenswrapper[4731]: E1125 15:44:43.743014 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8815eebf-c8f8-49f9-94fb-5c3c454514d5" containerName="openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.743020 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8815eebf-c8f8-49f9-94fb-5c3c454514d5" containerName="openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: E1125 15:44:43.743038 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerName="extract-content" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.743044 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerName="extract-content" Nov 25 15:44:43 crc kubenswrapper[4731]: E1125 15:44:43.743066 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerName="registry-server" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.743072 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerName="registry-server" Nov 25 15:44:43 crc kubenswrapper[4731]: E1125 15:44:43.743086 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.743093 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.743286 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8815eebf-c8f8-49f9-94fb-5c3c454514d5" containerName="openstackclient" Nov 25 15:44:43 crc 
kubenswrapper[4731]: I1125 15:44:43.743316 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e09a8be0-e412-4be6-91f9-7c5b05f11444" containerName="registry-server" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.743327 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fbd03b7-1cca-4719-ae8b-e532f0d61c9e" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.744000 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.753262 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.771589 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8815eebf-c8f8-49f9-94fb-5c3c454514d5" podUID="18b7922d-f0bb-4b3c-90ff-c7456e477222" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.819072 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:44:43 crc kubenswrapper[4731]: E1125 15:44:43.819492 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.851407 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw5s2\" (UniqueName: \"kubernetes.io/projected/18b7922d-f0bb-4b3c-90ff-c7456e477222-kube-api-access-bw5s2\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.851483 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config-secret\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.851549 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.851585 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-combined-ca-bundle\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.953655 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config\") pod \"openstackclient\" (UID: 
\"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.953743 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-combined-ca-bundle\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.953903 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw5s2\" (UniqueName: \"kubernetes.io/projected/18b7922d-f0bb-4b3c-90ff-c7456e477222-kube-api-access-bw5s2\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.953976 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config-secret\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.954627 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.962037 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config-secret\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.962683 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-combined-ca-bundle\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:43 crc kubenswrapper[4731]: I1125 15:44:43.972106 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw5s2\" (UniqueName: \"kubernetes.io/projected/18b7922d-f0bb-4b3c-90ff-c7456e477222-kube-api-access-bw5s2\") pod \"openstackclient\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " pod="openstack/openstackclient" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.137935 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.658243 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 15:44:44 crc kubenswrapper[4731]: W1125 15:44:44.660040 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18b7922d_f0bb_4b3c_90ff_c7456e477222.slice/crio-d514f6d518512a891698b8341ee8a894434c8cb7c0632872092734b9afd13f15 WatchSource:0}: Error finding container d514f6d518512a891698b8341ee8a894434c8cb7c0632872092734b9afd13f15: Status 404 returned error can't find the container with id d514f6d518512a891698b8341ee8a894434c8cb7c0632872092734b9afd13f15 Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.854983 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-fmqbg"] Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.857218 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fmqbg" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.865521 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-fmqbg"] Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.870640 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/330ce20e-583d-4d35-86e3-be01b80a00e4-operator-scripts\") pod \"aodh-db-create-fmqbg\" (UID: \"330ce20e-583d-4d35-86e3-be01b80a00e4\") " pod="openstack/aodh-db-create-fmqbg" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.870731 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jmcx\" (UniqueName: \"kubernetes.io/projected/330ce20e-583d-4d35-86e3-be01b80a00e4-kube-api-access-6jmcx\") pod \"aodh-db-create-fmqbg\" (UID: \"330ce20e-583d-4d35-86e3-be01b80a00e4\") " pod="openstack/aodh-db-create-fmqbg" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.932326 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"18b7922d-f0bb-4b3c-90ff-c7456e477222","Type":"ContainerStarted","Data":"6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3"} Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.932367 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"18b7922d-f0bb-4b3c-90ff-c7456e477222","Type":"ContainerStarted","Data":"d514f6d518512a891698b8341ee8a894434c8cb7c0632872092734b9afd13f15"} Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.957826 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-500b-account-create-dczmz"] Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.959296 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-500b-account-create-dczmz" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.962739 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.962717809 podStartE2EDuration="1.962717809s" podCreationTimestamp="2025-11-25 15:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:44:44.955003928 +0000 UTC m=+3043.702138358" watchObservedRunningTime="2025-11-25 15:44:44.962717809 +0000 UTC m=+3043.709852239" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.963647 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.974969 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5tn5\" (UniqueName: \"kubernetes.io/projected/ed9de000-2e16-4a0d-9c13-3a7e206d6315-kube-api-access-b5tn5\") pod \"aodh-500b-account-create-dczmz\" (UID: \"ed9de000-2e16-4a0d-9c13-3a7e206d6315\") " pod="openstack/aodh-500b-account-create-dczmz" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.975076 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jmcx\" (UniqueName: \"kubernetes.io/projected/330ce20e-583d-4d35-86e3-be01b80a00e4-kube-api-access-6jmcx\") pod \"aodh-db-create-fmqbg\" (UID: \"330ce20e-583d-4d35-86e3-be01b80a00e4\") " pod="openstack/aodh-db-create-fmqbg" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.975452 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/330ce20e-583d-4d35-86e3-be01b80a00e4-operator-scripts\") pod \"aodh-db-create-fmqbg\" (UID: \"330ce20e-583d-4d35-86e3-be01b80a00e4\") " pod="openstack/aodh-db-create-fmqbg" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.975492 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed9de000-2e16-4a0d-9c13-3a7e206d6315-operator-scripts\") pod \"aodh-500b-account-create-dczmz\" (UID: \"ed9de000-2e16-4a0d-9c13-3a7e206d6315\") " pod="openstack/aodh-500b-account-create-dczmz" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.978347 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/330ce20e-583d-4d35-86e3-be01b80a00e4-operator-scripts\") pod \"aodh-db-create-fmqbg\" (UID: \"330ce20e-583d-4d35-86e3-be01b80a00e4\") " pod="openstack/aodh-db-create-fmqbg" Nov 25 15:44:44 crc kubenswrapper[4731]: I1125 15:44:44.996349 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jmcx\" (UniqueName: \"kubernetes.io/projected/330ce20e-583d-4d35-86e3-be01b80a00e4-kube-api-access-6jmcx\") pod \"aodh-db-create-fmqbg\" (UID: \"330ce20e-583d-4d35-86e3-be01b80a00e4\") " pod="openstack/aodh-db-create-fmqbg" Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.002401 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-500b-account-create-dczmz"] Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.077962 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/ed9de000-2e16-4a0d-9c13-3a7e206d6315-operator-scripts\") pod \"aodh-500b-account-create-dczmz\" (UID: \"ed9de000-2e16-4a0d-9c13-3a7e206d6315\") " pod="openstack/aodh-500b-account-create-dczmz" Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.078102 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5tn5\" (UniqueName: \"kubernetes.io/projected/ed9de000-2e16-4a0d-9c13-3a7e206d6315-kube-api-access-b5tn5\") pod \"aodh-500b-account-create-dczmz\" (UID: \"ed9de000-2e16-4a0d-9c13-3a7e206d6315\") " pod="openstack/aodh-500b-account-create-dczmz" Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.079608 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed9de000-2e16-4a0d-9c13-3a7e206d6315-operator-scripts\") pod \"aodh-500b-account-create-dczmz\" (UID: \"ed9de000-2e16-4a0d-9c13-3a7e206d6315\") " pod="openstack/aodh-500b-account-create-dczmz" Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.095294 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5tn5\" (UniqueName: \"kubernetes.io/projected/ed9de000-2e16-4a0d-9c13-3a7e206d6315-kube-api-access-b5tn5\") pod \"aodh-500b-account-create-dczmz\" (UID: \"ed9de000-2e16-4a0d-9c13-3a7e206d6315\") " pod="openstack/aodh-500b-account-create-dczmz" Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.185692 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fmqbg" Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.367532 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-500b-account-create-dczmz" Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.668380 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-fmqbg"] Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.831671 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-500b-account-create-dczmz"] Nov 25 15:44:45 crc kubenswrapper[4731]: W1125 15:44:45.848538 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded9de000_2e16_4a0d_9c13_3a7e206d6315.slice/crio-6350fdbb53905106923615dd70f0dc8260e778c534c8544989419ec344656a71 WatchSource:0}: Error finding container 6350fdbb53905106923615dd70f0dc8260e778c534c8544989419ec344656a71: Status 404 returned error can't find the container with id 6350fdbb53905106923615dd70f0dc8260e778c534c8544989419ec344656a71 Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.956531 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.961298 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-500b-account-create-dczmz" event={"ID":"ed9de000-2e16-4a0d-9c13-3a7e206d6315","Type":"ContainerStarted","Data":"6350fdbb53905106923615dd70f0dc8260e778c534c8544989419ec344656a71"} Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.962790 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fmqbg" event={"ID":"330ce20e-583d-4d35-86e3-be01b80a00e4","Type":"ContainerStarted","Data":"1578519684ce643745dafa27f50236734165be45ab0c2fce01f964330980712f"} Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.962839 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fmqbg" event={"ID":"330ce20e-583d-4d35-86e3-be01b80a00e4","Type":"ContainerStarted","Data":"3598ee2de1cf215d98d1981a49f104892766531b8799276148cf3c3c9212adc3"} Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.965543 4731 generic.go:334] "Generic (PLEG): container finished" podID="8815eebf-c8f8-49f9-94fb-5c3c454514d5" containerID="73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0" exitCode=137 Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.966080 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.966263 4731 scope.go:117] "RemoveContainer" containerID="73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0" Nov 25 15:44:45 crc kubenswrapper[4731]: I1125 15:44:45.989515 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-fmqbg" podStartSLOduration=1.989495271 podStartE2EDuration="1.989495271s" podCreationTimestamp="2025-11-25 15:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:44:45.979289837 +0000 UTC m=+3044.726424267" watchObservedRunningTime="2025-11-25 15:44:45.989495271 +0000 UTC m=+3044.736629701" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.010127 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config\") pod \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.010333 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wszdz\" (UniqueName: \"kubernetes.io/projected/8815eebf-c8f8-49f9-94fb-5c3c454514d5-kube-api-access-wszdz\") pod \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.010403 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config-secret\") pod \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.010494 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-combined-ca-bundle\") pod 
\"8815eebf-c8f8-49f9-94fb-5c3c454514d5\" (UID: \"8815eebf-c8f8-49f9-94fb-5c3c454514d5\") " Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.017523 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8815eebf-c8f8-49f9-94fb-5c3c454514d5-kube-api-access-wszdz" (OuterVolumeSpecName: "kube-api-access-wszdz") pod "8815eebf-c8f8-49f9-94fb-5c3c454514d5" (UID: "8815eebf-c8f8-49f9-94fb-5c3c454514d5"). InnerVolumeSpecName "kube-api-access-wszdz". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.047871 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8815eebf-c8f8-49f9-94fb-5c3c454514d5" (UID: "8815eebf-c8f8-49f9-94fb-5c3c454514d5"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.054141 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8815eebf-c8f8-49f9-94fb-5c3c454514d5" (UID: "8815eebf-c8f8-49f9-94fb-5c3c454514d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.070689 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8815eebf-c8f8-49f9-94fb-5c3c454514d5" (UID: "8815eebf-c8f8-49f9-94fb-5c3c454514d5"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.113414 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wszdz\" (UniqueName: \"kubernetes.io/projected/8815eebf-c8f8-49f9-94fb-5c3c454514d5-kube-api-access-wszdz\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.113444 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.113455 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8815eebf-c8f8-49f9-94fb-5c3c454514d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.113464 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8815eebf-c8f8-49f9-94fb-5c3c454514d5-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.197902 4731 scope.go:117] "RemoveContainer" containerID="73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0" Nov 25 15:44:46 crc kubenswrapper[4731]: E1125 15:44:46.198863 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0\": container with ID starting with 73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0 not found: ID does not exist" containerID="73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.198904 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0"} err="failed to get container status \"73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0\": rpc error: code = NotFound desc = could not find container \"73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0\": container with ID starting with 73080d29582b58768d867598efda78dd573b99921dc2f053d056c0221ef69cf0 not found: ID does not exist" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.285332 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8815eebf-c8f8-49f9-94fb-5c3c454514d5" podUID="18b7922d-f0bb-4b3c-90ff-c7456e477222" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.822976 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8815eebf-c8f8-49f9-94fb-5c3c454514d5" path="/var/lib/kubelet/pods/8815eebf-c8f8-49f9-94fb-5c3c454514d5/volumes" Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.977431 4731 generic.go:334] "Generic (PLEG): container finished" podID="ed9de000-2e16-4a0d-9c13-3a7e206d6315" containerID="08ef53bad59317d71001bc280a2ffbbae3ee012001a91fba50371676257bd4e5" exitCode=0 Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.977505 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-500b-account-create-dczmz" event={"ID":"ed9de000-2e16-4a0d-9c13-3a7e206d6315","Type":"ContainerDied","Data":"08ef53bad59317d71001bc280a2ffbbae3ee012001a91fba50371676257bd4e5"} Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 
15:44:46.979426 4731 generic.go:334] "Generic (PLEG): container finished" podID="330ce20e-583d-4d35-86e3-be01b80a00e4" containerID="1578519684ce643745dafa27f50236734165be45ab0c2fce01f964330980712f" exitCode=0 Nov 25 15:44:46 crc kubenswrapper[4731]: I1125 15:44:46.979512 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fmqbg" event={"ID":"330ce20e-583d-4d35-86e3-be01b80a00e4","Type":"ContainerDied","Data":"1578519684ce643745dafa27f50236734165be45ab0c2fce01f964330980712f"} Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.402193 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-fmqbg" Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.409572 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-500b-account-create-dczmz" Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.496640 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed9de000-2e16-4a0d-9c13-3a7e206d6315-operator-scripts\") pod \"ed9de000-2e16-4a0d-9c13-3a7e206d6315\" (UID: \"ed9de000-2e16-4a0d-9c13-3a7e206d6315\") " Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.496806 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/330ce20e-583d-4d35-86e3-be01b80a00e4-operator-scripts\") pod \"330ce20e-583d-4d35-86e3-be01b80a00e4\" (UID: \"330ce20e-583d-4d35-86e3-be01b80a00e4\") " Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.496887 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jmcx\" (UniqueName: \"kubernetes.io/projected/330ce20e-583d-4d35-86e3-be01b80a00e4-kube-api-access-6jmcx\") pod \"330ce20e-583d-4d35-86e3-be01b80a00e4\" (UID: \"330ce20e-583d-4d35-86e3-be01b80a00e4\") " Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.496921 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5tn5\" (UniqueName: \"kubernetes.io/projected/ed9de000-2e16-4a0d-9c13-3a7e206d6315-kube-api-access-b5tn5\") pod \"ed9de000-2e16-4a0d-9c13-3a7e206d6315\" (UID: \"ed9de000-2e16-4a0d-9c13-3a7e206d6315\") " Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.497518 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed9de000-2e16-4a0d-9c13-3a7e206d6315-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ed9de000-2e16-4a0d-9c13-3a7e206d6315" (UID: "ed9de000-2e16-4a0d-9c13-3a7e206d6315"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.497566 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/330ce20e-583d-4d35-86e3-be01b80a00e4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "330ce20e-583d-4d35-86e3-be01b80a00e4" (UID: "330ce20e-583d-4d35-86e3-be01b80a00e4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.502482 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330ce20e-583d-4d35-86e3-be01b80a00e4-kube-api-access-6jmcx" (OuterVolumeSpecName: "kube-api-access-6jmcx") pod "330ce20e-583d-4d35-86e3-be01b80a00e4" (UID: "330ce20e-583d-4d35-86e3-be01b80a00e4"). InnerVolumeSpecName "kube-api-access-6jmcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.502559 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed9de000-2e16-4a0d-9c13-3a7e206d6315-kube-api-access-b5tn5" (OuterVolumeSpecName: "kube-api-access-b5tn5") pod "ed9de000-2e16-4a0d-9c13-3a7e206d6315" (UID: "ed9de000-2e16-4a0d-9c13-3a7e206d6315"). InnerVolumeSpecName "kube-api-access-b5tn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.599258 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/330ce20e-583d-4d35-86e3-be01b80a00e4-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.599327 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jmcx\" (UniqueName: \"kubernetes.io/projected/330ce20e-583d-4d35-86e3-be01b80a00e4-kube-api-access-6jmcx\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.599339 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5tn5\" (UniqueName: \"kubernetes.io/projected/ed9de000-2e16-4a0d-9c13-3a7e206d6315-kube-api-access-b5tn5\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:48 crc kubenswrapper[4731]: I1125 15:44:48.599349 4731 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ed9de000-2e16-4a0d-9c13-3a7e206d6315-operator-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:49 crc kubenswrapper[4731]: I1125 15:44:49.027006 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-500b-account-create-dczmz" event={"ID":"ed9de000-2e16-4a0d-9c13-3a7e206d6315","Type":"ContainerDied","Data":"6350fdbb53905106923615dd70f0dc8260e778c534c8544989419ec344656a71"} Nov 25 15:44:49 crc kubenswrapper[4731]: I1125 15:44:49.027061 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6350fdbb53905106923615dd70f0dc8260e778c534c8544989419ec344656a71" Nov 25 15:44:49 crc kubenswrapper[4731]: I1125 15:44:49.027060 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-500b-account-create-dczmz" Nov 25 15:44:49 crc kubenswrapper[4731]: I1125 15:44:49.030652 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-fmqbg" event={"ID":"330ce20e-583d-4d35-86e3-be01b80a00e4","Type":"ContainerDied","Data":"3598ee2de1cf215d98d1981a49f104892766531b8799276148cf3c3c9212adc3"} Nov 25 15:44:49 crc kubenswrapper[4731]: I1125 15:44:49.030697 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3598ee2de1cf215d98d1981a49f104892766531b8799276148cf3c3c9212adc3" Nov 25 15:44:49 crc kubenswrapper[4731]: I1125 15:44:49.030889 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-fmqbg" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.216079 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-mmdqv"] Nov 25 15:44:50 crc kubenswrapper[4731]: E1125 15:44:50.216781 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed9de000-2e16-4a0d-9c13-3a7e206d6315" containerName="mariadb-account-create" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.216798 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed9de000-2e16-4a0d-9c13-3a7e206d6315" containerName="mariadb-account-create" Nov 25 15:44:50 crc kubenswrapper[4731]: E1125 15:44:50.216881 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330ce20e-583d-4d35-86e3-be01b80a00e4" containerName="mariadb-database-create" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.217038 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="330ce20e-583d-4d35-86e3-be01b80a00e4" containerName="mariadb-database-create" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.217268 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="330ce20e-583d-4d35-86e3-be01b80a00e4" containerName="mariadb-database-create" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.217295 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed9de000-2e16-4a0d-9c13-3a7e206d6315" containerName="mariadb-account-create" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.218105 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.221322 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.221371 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-h9h4w" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.222443 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.222653 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.232925 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-mmdqv"] Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.336542 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-scripts\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.336639 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-config-data\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.336842 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfcwv\" (UniqueName: \"kubernetes.io/projected/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-kube-api-access-kfcwv\") pod \"aodh-db-sync-mmdqv\" (UID: 
\"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.336916 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-combined-ca-bundle\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.439544 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-combined-ca-bundle\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.439619 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-scripts\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.439686 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-config-data\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.440348 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfcwv\" (UniqueName: \"kubernetes.io/projected/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-kube-api-access-kfcwv\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.443894 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-scripts\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.450672 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-config-data\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.452374 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-combined-ca-bundle\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.473488 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfcwv\" (UniqueName: \"kubernetes.io/projected/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-kube-api-access-kfcwv\") pod \"aodh-db-sync-mmdqv\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.536588 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:50 crc kubenswrapper[4731]: I1125 15:44:50.973987 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-mmdqv"] Nov 25 15:44:51 crc kubenswrapper[4731]: I1125 15:44:51.050031 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mmdqv" event={"ID":"0eb3c859-484c-41c6-b15f-9f565a6a9eb6","Type":"ContainerStarted","Data":"801e971c7f92e033daec41e88485318707f20a33dd572cbb5d0a126765c2f2ce"} Nov 25 15:44:55 crc kubenswrapper[4731]: I1125 15:44:55.088255 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mmdqv" event={"ID":"0eb3c859-484c-41c6-b15f-9f565a6a9eb6","Type":"ContainerStarted","Data":"ed9d82f220e8a9c365b10e90e1044b3aaf3afc8dc84b1eec0f659969d9f4eafd"} Nov 25 15:44:55 crc kubenswrapper[4731]: I1125 15:44:55.110091 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-mmdqv" podStartSLOduration=1.428564057 podStartE2EDuration="5.110070597s" podCreationTimestamp="2025-11-25 15:44:50 +0000 UTC" firstStartedPulling="2025-11-25 15:44:50.975774407 +0000 UTC m=+3049.722908837" lastFinishedPulling="2025-11-25 15:44:54.657280947 +0000 UTC m=+3053.404415377" observedRunningTime="2025-11-25 15:44:55.101314879 +0000 UTC m=+3053.848449309" watchObservedRunningTime="2025-11-25 15:44:55.110070597 +0000 UTC m=+3053.857205017" Nov 25 15:44:57 crc kubenswrapper[4731]: I1125 15:44:57.108862 4731 generic.go:334] "Generic (PLEG): container finished" podID="0eb3c859-484c-41c6-b15f-9f565a6a9eb6" containerID="ed9d82f220e8a9c365b10e90e1044b3aaf3afc8dc84b1eec0f659969d9f4eafd" exitCode=0 Nov 25 15:44:57 crc kubenswrapper[4731]: I1125 15:44:57.108959 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mmdqv" event={"ID":"0eb3c859-484c-41c6-b15f-9f565a6a9eb6","Type":"ContainerDied","Data":"ed9d82f220e8a9c365b10e90e1044b3aaf3afc8dc84b1eec0f659969d9f4eafd"} Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.552772 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.598967 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-config-data\") pod \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.599088 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-combined-ca-bundle\") pod \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.599143 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfcwv\" (UniqueName: \"kubernetes.io/projected/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-kube-api-access-kfcwv\") pod \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.599190 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-scripts\") pod \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\" (UID: \"0eb3c859-484c-41c6-b15f-9f565a6a9eb6\") " Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.605989 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-kube-api-access-kfcwv" (OuterVolumeSpecName: "kube-api-access-kfcwv") pod "0eb3c859-484c-41c6-b15f-9f565a6a9eb6" (UID: "0eb3c859-484c-41c6-b15f-9f565a6a9eb6"). InnerVolumeSpecName "kube-api-access-kfcwv". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.607067 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-scripts" (OuterVolumeSpecName: "scripts") pod "0eb3c859-484c-41c6-b15f-9f565a6a9eb6" (UID: "0eb3c859-484c-41c6-b15f-9f565a6a9eb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.633448 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-config-data" (OuterVolumeSpecName: "config-data") pod "0eb3c859-484c-41c6-b15f-9f565a6a9eb6" (UID: "0eb3c859-484c-41c6-b15f-9f565a6a9eb6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.635718 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0eb3c859-484c-41c6-b15f-9f565a6a9eb6" (UID: "0eb3c859-484c-41c6-b15f-9f565a6a9eb6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.702402 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.702743 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.702860 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfcwv\" (UniqueName: \"kubernetes.io/projected/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-kube-api-access-kfcwv\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.702925 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0eb3c859-484c-41c6-b15f-9f565a6a9eb6-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:44:58 crc kubenswrapper[4731]: I1125 15:44:58.812435 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:44:58 crc kubenswrapper[4731]: E1125 15:44:58.812917 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.130014 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-mmdqv" event={"ID":"0eb3c859-484c-41c6-b15f-9f565a6a9eb6","Type":"ContainerDied","Data":"801e971c7f92e033daec41e88485318707f20a33dd572cbb5d0a126765c2f2ce"} Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.130093 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="801e971c7f92e033daec41e88485318707f20a33dd572cbb5d0a126765c2f2ce" Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.130100 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-mmdqv" Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.909610 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 25 15:44:59 crc kubenswrapper[4731]: E1125 15:44:59.910089 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eb3c859-484c-41c6-b15f-9f565a6a9eb6" containerName="aodh-db-sync" Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.910103 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eb3c859-484c-41c6-b15f-9f565a6a9eb6" containerName="aodh-db-sync" Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.910391 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eb3c859-484c-41c6-b15f-9f565a6a9eb6" containerName="aodh-db-sync" Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.913048 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.923552 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.923641 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.923552 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-h9h4w" Nov 25 15:44:59 crc kubenswrapper[4731]: I1125 15:44:59.952589 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.030587 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.030637 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-config-data\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.030840 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-scripts\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.030896 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfc5d\" (UniqueName: \"kubernetes.io/projected/9b0af9b9-3996-4d68-b7b4-9f83fd904609-kube-api-access-wfc5d\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.132462 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-scripts\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.132553 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfc5d\" (UniqueName: \"kubernetes.io/projected/9b0af9b9-3996-4d68-b7b4-9f83fd904609-kube-api-access-wfc5d\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.132613 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.132644 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-config-data\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: 
I1125 15:45:00.144838 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp"] Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.146459 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.150170 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.150445 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.150639 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-scripts\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.151232 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.151627 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-config-data\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.159690 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfc5d\" (UniqueName: \"kubernetes.io/projected/9b0af9b9-3996-4d68-b7b4-9f83fd904609-kube-api-access-wfc5d\") pod \"aodh-0\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.176311 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp"] Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.234926 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4945ca15-2ad5-4d61-af56-eed68ff2968d-config-volume\") pod \"collect-profiles-29401425-c4xkp\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.235075 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4945ca15-2ad5-4d61-af56-eed68ff2968d-secret-volume\") pod \"collect-profiles-29401425-c4xkp\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.235145 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5c57x\" (UniqueName: \"kubernetes.io/projected/4945ca15-2ad5-4d61-af56-eed68ff2968d-kube-api-access-5c57x\") pod \"collect-profiles-29401425-c4xkp\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") 
" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.251419 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.337723 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4945ca15-2ad5-4d61-af56-eed68ff2968d-secret-volume\") pod \"collect-profiles-29401425-c4xkp\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.337908 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5c57x\" (UniqueName: \"kubernetes.io/projected/4945ca15-2ad5-4d61-af56-eed68ff2968d-kube-api-access-5c57x\") pod \"collect-profiles-29401425-c4xkp\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.338100 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4945ca15-2ad5-4d61-af56-eed68ff2968d-config-volume\") pod \"collect-profiles-29401425-c4xkp\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.339605 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4945ca15-2ad5-4d61-af56-eed68ff2968d-config-volume\") pod \"collect-profiles-29401425-c4xkp\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.344370 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4945ca15-2ad5-4d61-af56-eed68ff2968d-secret-volume\") pod \"collect-profiles-29401425-c4xkp\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.359059 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5c57x\" (UniqueName: \"kubernetes.io/projected/4945ca15-2ad5-4d61-af56-eed68ff2968d-kube-api-access-5c57x\") pod \"collect-profiles-29401425-c4xkp\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.559395 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:00 crc kubenswrapper[4731]: I1125 15:45:00.885113 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 15:45:01 crc kubenswrapper[4731]: W1125 15:45:01.055419 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4945ca15_2ad5_4d61_af56_eed68ff2968d.slice/crio-33224e7618048d1ab7883ef90f0fcb2d4b817c6154a77e748438e197316e4595 WatchSource:0}: Error finding container 33224e7618048d1ab7883ef90f0fcb2d4b817c6154a77e748438e197316e4595: Status 404 returned error can't find the container with id 33224e7618048d1ab7883ef90f0fcb2d4b817c6154a77e748438e197316e4595 Nov 25 15:45:01 crc kubenswrapper[4731]: I1125 15:45:01.058101 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp"] Nov 25 15:45:01 crc kubenswrapper[4731]: I1125 15:45:01.169374 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9b0af9b9-3996-4d68-b7b4-9f83fd904609","Type":"ContainerStarted","Data":"a5781586d27049370c0a6737f33896254481d7bf4e6ff6dfc5bce680838c97ea"} Nov 25 15:45:01 crc kubenswrapper[4731]: I1125 15:45:01.174476 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" event={"ID":"4945ca15-2ad5-4d61-af56-eed68ff2968d","Type":"ContainerStarted","Data":"33224e7618048d1ab7883ef90f0fcb2d4b817c6154a77e748438e197316e4595"} Nov 25 15:45:02 crc kubenswrapper[4731]: I1125 15:45:02.187695 4731 generic.go:334] "Generic (PLEG): container finished" podID="4945ca15-2ad5-4d61-af56-eed68ff2968d" containerID="eb5377e6c1e80d918cc81f188b4402b460c952522a94131b84a2ac51f15fe7df" exitCode=0 Nov 25 15:45:02 crc kubenswrapper[4731]: I1125 15:45:02.187830 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" event={"ID":"4945ca15-2ad5-4d61-af56-eed68ff2968d","Type":"ContainerDied","Data":"eb5377e6c1e80d918cc81f188b4402b460c952522a94131b84a2ac51f15fe7df"} Nov 25 15:45:02 crc kubenswrapper[4731]: I1125 15:45:02.190735 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9b0af9b9-3996-4d68-b7b4-9f83fd904609","Type":"ContainerStarted","Data":"d5f337a21657d4370fedfab922e745c5360c916a020f8d117b164a6feb9f40c2"} Nov 25 15:45:02 crc kubenswrapper[4731]: I1125 15:45:02.661199 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:45:02 crc kubenswrapper[4731]: I1125 15:45:02.661532 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="ceilometer-central-agent" containerID="cri-o://116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467" gracePeriod=30 Nov 25 15:45:02 crc kubenswrapper[4731]: I1125 15:45:02.661621 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="ceilometer-notification-agent" containerID="cri-o://9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481" gracePeriod=30 Nov 25 15:45:02 crc kubenswrapper[4731]: I1125 15:45:02.661614 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="sg-core" containerID="cri-o://cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d" gracePeriod=30 Nov 25 15:45:02 crc kubenswrapper[4731]: I1125 15:45:02.661887 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="proxy-httpd" containerID="cri-o://ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656" gracePeriod=30 Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.205213 4731 generic.go:334] "Generic (PLEG): container finished" podID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerID="ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656" exitCode=0 Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.205833 4731 generic.go:334] "Generic (PLEG): container finished" podID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerID="cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d" exitCode=2 Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.205856 4731 generic.go:334] "Generic (PLEG): container finished" podID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerID="116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467" exitCode=0 Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.205853 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01072c78-f902-48e8-9edc-fe70ff7e4cd6","Type":"ContainerDied","Data":"ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656"} Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.206017 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01072c78-f902-48e8-9edc-fe70ff7e4cd6","Type":"ContainerDied","Data":"cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d"} Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.206039 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01072c78-f902-48e8-9edc-fe70ff7e4cd6","Type":"ContainerDied","Data":"116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467"} Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.458657 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.555398 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.644492 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4945ca15-2ad5-4d61-af56-eed68ff2968d-config-volume\") pod \"4945ca15-2ad5-4d61-af56-eed68ff2968d\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.644617 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4945ca15-2ad5-4d61-af56-eed68ff2968d-secret-volume\") pod \"4945ca15-2ad5-4d61-af56-eed68ff2968d\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.644737 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5c57x\" (UniqueName: \"kubernetes.io/projected/4945ca15-2ad5-4d61-af56-eed68ff2968d-kube-api-access-5c57x\") pod \"4945ca15-2ad5-4d61-af56-eed68ff2968d\" (UID: \"4945ca15-2ad5-4d61-af56-eed68ff2968d\") " Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.645326 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4945ca15-2ad5-4d61-af56-eed68ff2968d-config-volume" (OuterVolumeSpecName: "config-volume") pod "4945ca15-2ad5-4d61-af56-eed68ff2968d" (UID: "4945ca15-2ad5-4d61-af56-eed68ff2968d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.650343 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4945ca15-2ad5-4d61-af56-eed68ff2968d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4945ca15-2ad5-4d61-af56-eed68ff2968d" (UID: "4945ca15-2ad5-4d61-af56-eed68ff2968d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.650916 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4945ca15-2ad5-4d61-af56-eed68ff2968d-kube-api-access-5c57x" (OuterVolumeSpecName: "kube-api-access-5c57x") pod "4945ca15-2ad5-4d61-af56-eed68ff2968d" (UID: "4945ca15-2ad5-4d61-af56-eed68ff2968d"). InnerVolumeSpecName "kube-api-access-5c57x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.747097 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4945ca15-2ad5-4d61-af56-eed68ff2968d-config-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.747245 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4945ca15-2ad5-4d61-af56-eed68ff2968d-secret-volume\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:03 crc kubenswrapper[4731]: I1125 15:45:03.747256 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5c57x\" (UniqueName: \"kubernetes.io/projected/4945ca15-2ad5-4d61-af56-eed68ff2968d-kube-api-access-5c57x\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:04 crc kubenswrapper[4731]: I1125 15:45:04.222499 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" event={"ID":"4945ca15-2ad5-4d61-af56-eed68ff2968d","Type":"ContainerDied","Data":"33224e7618048d1ab7883ef90f0fcb2d4b817c6154a77e748438e197316e4595"} Nov 25 15:45:04 crc kubenswrapper[4731]: I1125 15:45:04.222529 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401425-c4xkp" Nov 25 15:45:04 crc kubenswrapper[4731]: I1125 15:45:04.222545 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33224e7618048d1ab7883ef90f0fcb2d4b817c6154a77e748438e197316e4595" Nov 25 15:45:04 crc kubenswrapper[4731]: I1125 15:45:04.225291 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9b0af9b9-3996-4d68-b7b4-9f83fd904609","Type":"ContainerStarted","Data":"52a8e3d91483997a93a45ad19098ecaf4bc161f764c95bdbf2122aa97863537c"} Nov 25 15:45:04 crc kubenswrapper[4731]: I1125 15:45:04.472143 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.209:3000/\": dial tcp 10.217.0.209:3000: connect: connection refused" Nov 25 15:45:04 crc kubenswrapper[4731]: I1125 15:45:04.632668 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr"] Nov 25 15:45:04 crc kubenswrapper[4731]: I1125 15:45:04.643458 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401380-x7gvr"] Nov 25 15:45:04 crc kubenswrapper[4731]: I1125 15:45:04.822479 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9970e656-62f1-49c3-a6b6-085caa1b9860" path="/var/lib/kubelet/pods/9970e656-62f1-49c3-a6b6-085caa1b9860/volumes" Nov 25 15:45:05 crc kubenswrapper[4731]: I1125 15:45:05.234501 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9b0af9b9-3996-4d68-b7b4-9f83fd904609","Type":"ContainerStarted","Data":"068a73a86d26dbde95ae9fdec4df7da4d00803353bd1eba92d297120aaabbc6d"} Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.301890 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.389607 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-combined-ca-bundle\") pod \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.390380 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-run-httpd\") pod \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.390516 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-scripts\") pod \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.393841 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "01072c78-f902-48e8-9edc-fe70ff7e4cd6" (UID: "01072c78-f902-48e8-9edc-fe70ff7e4cd6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.398761 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-scripts" (OuterVolumeSpecName: "scripts") pod "01072c78-f902-48e8-9edc-fe70ff7e4cd6" (UID: "01072c78-f902-48e8-9edc-fe70ff7e4cd6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.482241 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "01072c78-f902-48e8-9edc-fe70ff7e4cd6" (UID: "01072c78-f902-48e8-9edc-fe70ff7e4cd6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.493553 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-config-data\") pod \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.493598 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-ceilometer-tls-certs\") pod \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.493974 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-log-httpd\") pod \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.494759 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "01072c78-f902-48e8-9edc-fe70ff7e4cd6" (UID: "01072c78-f902-48e8-9edc-fe70ff7e4cd6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.495520 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-sg-core-conf-yaml\") pod \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.496003 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pwb5\" (UniqueName: \"kubernetes.io/projected/01072c78-f902-48e8-9edc-fe70ff7e4cd6-kube-api-access-4pwb5\") pod \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\" (UID: \"01072c78-f902-48e8-9edc-fe70ff7e4cd6\") " Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.497514 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.497535 4731 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-run-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.497543 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.497552 4731 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/01072c78-f902-48e8-9edc-fe70ff7e4cd6-log-httpd\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.498639 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01072c78-f902-48e8-9edc-fe70ff7e4cd6-kube-api-access-4pwb5" (OuterVolumeSpecName: 
"kube-api-access-4pwb5") pod "01072c78-f902-48e8-9edc-fe70ff7e4cd6" (UID: "01072c78-f902-48e8-9edc-fe70ff7e4cd6"). InnerVolumeSpecName "kube-api-access-4pwb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.523540 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "01072c78-f902-48e8-9edc-fe70ff7e4cd6" (UID: "01072c78-f902-48e8-9edc-fe70ff7e4cd6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.544602 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "01072c78-f902-48e8-9edc-fe70ff7e4cd6" (UID: "01072c78-f902-48e8-9edc-fe70ff7e4cd6"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.587550 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-config-data" (OuterVolumeSpecName: "config-data") pod "01072c78-f902-48e8-9edc-fe70ff7e4cd6" (UID: "01072c78-f902-48e8-9edc-fe70ff7e4cd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.599231 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pwb5\" (UniqueName: \"kubernetes.io/projected/01072c78-f902-48e8-9edc-fe70ff7e4cd6-kube-api-access-4pwb5\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.599258 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.599274 4731 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.599284 4731 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/01072c78-f902-48e8-9edc-fe70ff7e4cd6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.701743 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9b0af9b9-3996-4d68-b7b4-9f83fd904609","Type":"ContainerStarted","Data":"79fed4044b49b57992e4ad421409485d008fa48b4c310181a5169848495be4bb"} Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.702222 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-listener" containerID="cri-o://79fed4044b49b57992e4ad421409485d008fa48b4c310181a5169848495be4bb" gracePeriod=30 Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.702252 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-evaluator" 
containerID="cri-o://52a8e3d91483997a93a45ad19098ecaf4bc161f764c95bdbf2122aa97863537c" gracePeriod=30 Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.702223 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-notifier" containerID="cri-o://068a73a86d26dbde95ae9fdec4df7da4d00803353bd1eba92d297120aaabbc6d" gracePeriod=30 Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.702245 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-api" containerID="cri-o://d5f337a21657d4370fedfab922e745c5360c916a020f8d117b164a6feb9f40c2" gracePeriod=30 Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.706288 4731 generic.go:334] "Generic (PLEG): container finished" podID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerID="9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481" exitCode=0 Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.706446 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.706459 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01072c78-f902-48e8-9edc-fe70ff7e4cd6","Type":"ContainerDied","Data":"9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481"} Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.706498 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"01072c78-f902-48e8-9edc-fe70ff7e4cd6","Type":"ContainerDied","Data":"69c5066ca534d700df759681d26a9cefc834bf993bcc542ace56c40d320f8995"} Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.706524 4731 scope.go:117] "RemoveContainer" containerID="ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.732217 4731 scope.go:117] "RemoveContainer" containerID="cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.763498 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.9029699989999997 podStartE2EDuration="8.763478488s" podCreationTimestamp="2025-11-25 15:44:59 +0000 UTC" firstStartedPulling="2025-11-25 15:45:00.877185954 +0000 UTC m=+3059.624320374" lastFinishedPulling="2025-11-25 15:45:06.737694433 +0000 UTC m=+3065.484828863" observedRunningTime="2025-11-25 15:45:07.728209952 +0000 UTC m=+3066.475344372" watchObservedRunningTime="2025-11-25 15:45:07.763478488 +0000 UTC m=+3066.510612928" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.783403 4731 scope.go:117] "RemoveContainer" containerID="9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.799077 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.824152 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.832096 4731 scope.go:117] "RemoveContainer" containerID="116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.837066 4731 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Nov 25 15:45:07 crc kubenswrapper[4731]: E1125 15:45:07.837625 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="ceilometer-central-agent" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.837643 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="ceilometer-central-agent" Nov 25 15:45:07 crc kubenswrapper[4731]: E1125 15:45:07.837657 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4945ca15-2ad5-4d61-af56-eed68ff2968d" containerName="collect-profiles" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.837664 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="4945ca15-2ad5-4d61-af56-eed68ff2968d" containerName="collect-profiles" Nov 25 15:45:07 crc kubenswrapper[4731]: E1125 15:45:07.837707 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="ceilometer-notification-agent" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.837714 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="ceilometer-notification-agent" Nov 25 15:45:07 crc kubenswrapper[4731]: E1125 15:45:07.837730 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="proxy-httpd" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.837737 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="proxy-httpd" Nov 25 15:45:07 crc kubenswrapper[4731]: E1125 15:45:07.837751 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="sg-core" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.837758 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="sg-core" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.838034 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="4945ca15-2ad5-4d61-af56-eed68ff2968d" containerName="collect-profiles" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.838064 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="ceilometer-notification-agent" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.838076 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="ceilometer-central-agent" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.838091 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="sg-core" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.838111 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" containerName="proxy-httpd" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.845700 4731 util.go:30] "No sandbox for pod can be found. 
Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.848279 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.848524 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.848643 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.858739 4731 scope.go:117] "RemoveContainer" containerID="ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.859773 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:45:07 crc kubenswrapper[4731]: E1125 15:45:07.865334 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656\": container with ID starting with ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656 not found: ID does not exist" containerID="ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.865384 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656"} err="failed to get container status \"ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656\": rpc error: code = NotFound desc = could not find container \"ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656\": container with ID starting with ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656 not found: ID does not exist" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.865418 4731 scope.go:117] "RemoveContainer" containerID="cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d" Nov 25 15:45:07 crc kubenswrapper[4731]: E1125 15:45:07.868100 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d\": container with ID starting with cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d not found: ID does not exist" containerID="cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.868167 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d"} err="failed to get container status \"cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d\": rpc error: code = NotFound desc = could not find container \"cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d\": container with ID starting with cda2b7b7808982818578d0015a4b810d320e8292e7d98cab1d376d65dd3b257d not found: ID does not exist" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.868216 4731 scope.go:117] "RemoveContainer" containerID="9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481" Nov 25 15:45:07 crc kubenswrapper[4731]: E1125 15:45:07.869733 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container
\"9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481\": container with ID starting with 9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481 not found: ID does not exist" containerID="9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.869790 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481"} err="failed to get container status \"9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481\": rpc error: code = NotFound desc = could not find container \"9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481\": container with ID starting with 9cc31ee5b7243ca3b583960f54fc72cf8286e6bd09e22610cb6dee8052aac481 not found: ID does not exist" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.869905 4731 scope.go:117] "RemoveContainer" containerID="116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467" Nov 25 15:45:07 crc kubenswrapper[4731]: E1125 15:45:07.875549 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467\": container with ID starting with 116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467 not found: ID does not exist" containerID="116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.875605 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467"} err="failed to get container status \"116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467\": rpc error: code = NotFound desc = could not find container \"116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467\": container with ID starting with 116eee04434674afaa80d1b1141b9b3893e1d5365f2a388274bf125e71abb467 not found: ID does not exist" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.903799 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.904021 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.904073 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-config-data\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.904107 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-scripts\") pod \"ceilometer-0\" (UID: 
\"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.904130 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swm9r\" (UniqueName: \"kubernetes.io/projected/2d6f5688-5558-47ce-87a9-01d3f428c541-kube-api-access-swm9r\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.904164 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d6f5688-5558-47ce-87a9-01d3f428c541-log-httpd\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.904217 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:07 crc kubenswrapper[4731]: I1125 15:45:07.904297 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d6f5688-5558-47ce-87a9-01d3f428c541-run-httpd\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.008033 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.008098 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-config-data\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.008127 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-scripts\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.008148 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swm9r\" (UniqueName: \"kubernetes.io/projected/2d6f5688-5558-47ce-87a9-01d3f428c541-kube-api-access-swm9r\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.008176 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d6f5688-5558-47ce-87a9-01d3f428c541-log-httpd\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.008205 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.008241 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d6f5688-5558-47ce-87a9-01d3f428c541-run-httpd\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.008339 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.012567 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d6f5688-5558-47ce-87a9-01d3f428c541-run-httpd\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.013307 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d6f5688-5558-47ce-87a9-01d3f428c541-log-httpd\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.013797 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.019217 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.033680 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-config-data\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.042404 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-scripts\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.058086 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d6f5688-5558-47ce-87a9-01d3f428c541-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.058415 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swm9r\" (UniqueName: 
\"kubernetes.io/projected/2d6f5688-5558-47ce-87a9-01d3f428c541-kube-api-access-swm9r\") pod \"ceilometer-0\" (UID: \"2d6f5688-5558-47ce-87a9-01d3f428c541\") " pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.177422 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.686674 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.729412 4731 generic.go:334] "Generic (PLEG): container finished" podID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerID="068a73a86d26dbde95ae9fdec4df7da4d00803353bd1eba92d297120aaabbc6d" exitCode=0 Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.729456 4731 generic.go:334] "Generic (PLEG): container finished" podID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerID="52a8e3d91483997a93a45ad19098ecaf4bc161f764c95bdbf2122aa97863537c" exitCode=0 Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.729465 4731 generic.go:334] "Generic (PLEG): container finished" podID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerID="d5f337a21657d4370fedfab922e745c5360c916a020f8d117b164a6feb9f40c2" exitCode=0 Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.729514 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9b0af9b9-3996-4d68-b7b4-9f83fd904609","Type":"ContainerDied","Data":"068a73a86d26dbde95ae9fdec4df7da4d00803353bd1eba92d297120aaabbc6d"} Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.729542 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9b0af9b9-3996-4d68-b7b4-9f83fd904609","Type":"ContainerDied","Data":"52a8e3d91483997a93a45ad19098ecaf4bc161f764c95bdbf2122aa97863537c"} Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.729554 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9b0af9b9-3996-4d68-b7b4-9f83fd904609","Type":"ContainerDied","Data":"d5f337a21657d4370fedfab922e745c5360c916a020f8d117b164a6feb9f40c2"} Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.730860 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d6f5688-5558-47ce-87a9-01d3f428c541","Type":"ContainerStarted","Data":"13123585230b322f667ac8772e1ccaca1ec329a0349b85dee955f1a5f86bf43a"} Nov 25 15:45:08 crc kubenswrapper[4731]: I1125 15:45:08.824383 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01072c78-f902-48e8-9edc-fe70ff7e4cd6" path="/var/lib/kubelet/pods/01072c78-f902-48e8-9edc-fe70ff7e4cd6/volumes" Nov 25 15:45:09 crc kubenswrapper[4731]: I1125 15:45:09.741751 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d6f5688-5558-47ce-87a9-01d3f428c541","Type":"ContainerStarted","Data":"f1ccc4328d2ef0f2a25563ec72f211c3a67cec11f80a4b41d72aa17e8d641bb2"} Nov 25 15:45:09 crc kubenswrapper[4731]: I1125 15:45:09.811717 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:45:09 crc kubenswrapper[4731]: E1125 15:45:09.812344 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:45:10 crc kubenswrapper[4731]: E1125 15:45:10.167240 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-conmon-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:45:10 crc kubenswrapper[4731]: I1125 15:45:10.760776 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d6f5688-5558-47ce-87a9-01d3f428c541","Type":"ContainerStarted","Data":"700ee74eb6e5185bdf171d973ef7229b1a77afd5b119e15dd85fd0354fbbbf12"} Nov 25 15:45:10 crc kubenswrapper[4731]: I1125 15:45:10.761122 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d6f5688-5558-47ce-87a9-01d3f428c541","Type":"ContainerStarted","Data":"5c912d9bfd8ee66fb3c870f2c4bf08ec9be4c50cc5a355b4d9e40dc3cf73da39"} Nov 25 15:45:12 crc kubenswrapper[4731]: I1125 15:45:12.782739 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d6f5688-5558-47ce-87a9-01d3f428c541","Type":"ContainerStarted","Data":"f5c9d90ac276d541835c11a4ffbae4996fdd61d3b0885d5997902587e9087ad2"} Nov 25 15:45:12 crc kubenswrapper[4731]: I1125 15:45:12.783543 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Nov 25 15:45:12 crc kubenswrapper[4731]: I1125 15:45:12.818995 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.451674124 podStartE2EDuration="5.818964026s" podCreationTimestamp="2025-11-25 15:45:07 +0000 UTC" firstStartedPulling="2025-11-25 15:45:08.682273017 +0000 UTC m=+3067.429407447" lastFinishedPulling="2025-11-25 15:45:12.049562919 +0000 UTC m=+3070.796697349" observedRunningTime="2025-11-25 15:45:12.805676215 +0000 UTC m=+3071.552810665" watchObservedRunningTime="2025-11-25 15:45:12.818964026 +0000 UTC m=+3071.566098476" Nov 25 15:45:16 crc kubenswrapper[4731]: I1125 15:45:16.331179 4731 scope.go:117] "RemoveContainer" containerID="391b7e4f3aa1fbfd06417c7457824396f9e385a4423dae4d72a8c1028f703b26" Nov 25 15:45:20 crc kubenswrapper[4731]: E1125 15:45:20.428332 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-conmon-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:45:22 crc kubenswrapper[4731]: I1125 15:45:22.817920 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:45:22 crc kubenswrapper[4731]: E1125 15:45:22.819046 4731 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:45:30 crc kubenswrapper[4731]: E1125 15:45:30.681740 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-conmon-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:45:35 crc kubenswrapper[4731]: I1125 15:45:35.811558 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:45:35 crc kubenswrapper[4731]: E1125 15:45:35.812337 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.012528 4731 generic.go:334] "Generic (PLEG): container finished" podID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerID="79fed4044b49b57992e4ad421409485d008fa48b4c310181a5169848495be4bb" exitCode=137 Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.012607 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9b0af9b9-3996-4d68-b7b4-9f83fd904609","Type":"ContainerDied","Data":"79fed4044b49b57992e4ad421409485d008fa48b4c310181a5169848495be4bb"} Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.148718 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.185635 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.235173 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-scripts\") pod \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.235287 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-combined-ca-bundle\") pod \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.236465 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-config-data\") pod \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.236636 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfc5d\" (UniqueName: \"kubernetes.io/projected/9b0af9b9-3996-4d68-b7b4-9f83fd904609-kube-api-access-wfc5d\") pod \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\" (UID: \"9b0af9b9-3996-4d68-b7b4-9f83fd904609\") " Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.242877 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b0af9b9-3996-4d68-b7b4-9f83fd904609-kube-api-access-wfc5d" (OuterVolumeSpecName: "kube-api-access-wfc5d") pod "9b0af9b9-3996-4d68-b7b4-9f83fd904609" (UID: "9b0af9b9-3996-4d68-b7b4-9f83fd904609"). InnerVolumeSpecName "kube-api-access-wfc5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.287283 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-scripts" (OuterVolumeSpecName: "scripts") pod "9b0af9b9-3996-4d68-b7b4-9f83fd904609" (UID: "9b0af9b9-3996-4d68-b7b4-9f83fd904609"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.345692 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfc5d\" (UniqueName: \"kubernetes.io/projected/9b0af9b9-3996-4d68-b7b4-9f83fd904609-kube-api-access-wfc5d\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.345722 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.407212 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-config-data" (OuterVolumeSpecName: "config-data") pod "9b0af9b9-3996-4d68-b7b4-9f83fd904609" (UID: "9b0af9b9-3996-4d68-b7b4-9f83fd904609"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
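The exitCode=137 recorded just above at 15:45:38 for container 79fed404... of openstack/aodh-0 pairs with the gracePeriod=30 kills issued at 15:45:07: the containers that honored SIGTERM finished with exitCode=0 within a second, while this one was evidently still running ~30 seconds later and was SIGKILLed when the grace period lapsed; 137 is the conventional 128+9 encoding. A small decoder for that convention:

    package main

    import (
        "fmt"
        "syscall"
    )

    // signalExit decodes the 128+N convention for signal-terminated processes:
    // 137 = 128 + 9 (SIGKILL), 143 = 128 + 15 (SIGTERM).
    func signalExit(code int) (syscall.Signal, bool) {
        if code > 128 && code < 160 {
            return syscall.Signal(code - 128), true
        }
        return 0, false
    }

    func main() {
        for _, code := range []int{0, 137} { // the two exit codes seen in the aodh-0 events
            if sig, ok := signalExit(code); ok {
                fmt.Printf("exit %d => terminated by signal %d (%v)\n", code, int(sig), sig)
            } else {
                fmt.Printf("exit %d => normal exit\n", code)
            }
        }
    }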
Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.416646 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b0af9b9-3996-4d68-b7b4-9f83fd904609" (UID: "9b0af9b9-3996-4d68-b7b4-9f83fd904609"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.448150 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:38 crc kubenswrapper[4731]: I1125 15:45:38.448202 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b0af9b9-3996-4d68-b7b4-9f83fd904609-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.025375 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9b0af9b9-3996-4d68-b7b4-9f83fd904609","Type":"ContainerDied","Data":"a5781586d27049370c0a6737f33896254481d7bf4e6ff6dfc5bce680838c97ea"} Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.025436 4731 scope.go:117] "RemoveContainer" containerID="79fed4044b49b57992e4ad421409485d008fa48b4c310181a5169848495be4bb" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.025596 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.054326 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.060342 4731 scope.go:117] "RemoveContainer" containerID="068a73a86d26dbde95ae9fdec4df7da4d00803353bd1eba92d297120aaabbc6d" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.104630 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.106836 4731 scope.go:117] "RemoveContainer" containerID="52a8e3d91483997a93a45ad19098ecaf4bc161f764c95bdbf2122aa97863537c" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.117941 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 25 15:45:39 crc kubenswrapper[4731]: E1125 15:45:39.118368 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-evaluator" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.118393 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-evaluator" Nov 25 15:45:39 crc kubenswrapper[4731]: E1125 15:45:39.118416 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-listener" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.118425 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-listener" Nov 25 15:45:39 crc kubenswrapper[4731]: E1125 15:45:39.118435 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-notifier" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.118441 4731 state_mem.go:107] "Deleted CPUSet
assignment" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-notifier" Nov 25 15:45:39 crc kubenswrapper[4731]: E1125 15:45:39.118454 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-api" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.118460 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-api" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.118852 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-evaluator" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.118877 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-api" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.118896 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-notifier" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.118907 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" containerName="aodh-listener" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.123127 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.128783 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-h9h4w" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.129155 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.129415 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.129640 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.129937 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.131656 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.142362 4731 scope.go:117] "RemoveContainer" containerID="d5f337a21657d4370fedfab922e745c5360c916a020f8d117b164a6feb9f40c2" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.265411 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.265463 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-internal-tls-certs\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.265501 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-config-data\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.265598 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-scripts\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.265619 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-public-tls-certs\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.265646 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht55p\" (UniqueName: \"kubernetes.io/projected/7a14477f-8284-44d8-8f6d-67f5efec9f52-kube-api-access-ht55p\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.367059 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-scripts\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.367106 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-public-tls-certs\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.367132 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht55p\" (UniqueName: \"kubernetes.io/projected/7a14477f-8284-44d8-8f6d-67f5efec9f52-kube-api-access-ht55p\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.367189 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.367215 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-internal-tls-certs\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.367240 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-config-data\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.372469 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-scripts\") 
pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.372688 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-config-data\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.375385 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-combined-ca-bundle\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.375714 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-public-tls-certs\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.379390 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-internal-tls-certs\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.382949 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht55p\" (UniqueName: \"kubernetes.io/projected/7a14477f-8284-44d8-8f6d-67f5efec9f52-kube-api-access-ht55p\") pod \"aodh-0\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " pod="openstack/aodh-0" Nov 25 15:45:39 crc kubenswrapper[4731]: I1125 15:45:39.456860 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 15:45:40 crc kubenswrapper[4731]: I1125 15:45:40.049196 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 15:45:40 crc kubenswrapper[4731]: I1125 15:45:40.826559 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b0af9b9-3996-4d68-b7b4-9f83fd904609" path="/var/lib/kubelet/pods/9b0af9b9-3996-4d68-b7b4-9f83fd904609/volumes" Nov 25 15:45:40 crc kubenswrapper[4731]: E1125 15:45:40.923893 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-conmon-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:45:41 crc kubenswrapper[4731]: I1125 15:45:41.050575 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7a14477f-8284-44d8-8f6d-67f5efec9f52","Type":"ContainerStarted","Data":"3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3"} Nov 25 15:45:41 crc kubenswrapper[4731]: I1125 15:45:41.050627 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7a14477f-8284-44d8-8f6d-67f5efec9f52","Type":"ContainerStarted","Data":"3a2c823c3fc992f7bf53a1dd842b87ba0f1d845e1536cd2d6214707a5d4f548d"} Nov 25 15:45:42 crc kubenswrapper[4731]: I1125 15:45:42.069602 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7a14477f-8284-44d8-8f6d-67f5efec9f52","Type":"ContainerStarted","Data":"c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739"} Nov 25 15:45:43 crc kubenswrapper[4731]: I1125 15:45:43.082112 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7a14477f-8284-44d8-8f6d-67f5efec9f52","Type":"ContainerStarted","Data":"0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e"} Nov 25 15:45:43 crc kubenswrapper[4731]: I1125 15:45:43.083454 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7a14477f-8284-44d8-8f6d-67f5efec9f52","Type":"ContainerStarted","Data":"ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266"} Nov 25 15:45:43 crc kubenswrapper[4731]: I1125 15:45:43.122218 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.655489124 podStartE2EDuration="4.122193461s" podCreationTimestamp="2025-11-25 15:45:39 +0000 UTC" firstStartedPulling="2025-11-25 15:45:40.054463801 +0000 UTC m=+3098.801598231" lastFinishedPulling="2025-11-25 15:45:42.521168138 +0000 UTC m=+3101.268302568" observedRunningTime="2025-11-25 15:45:43.104916992 +0000 UTC m=+3101.852051422" watchObservedRunningTime="2025-11-25 15:45:43.122193461 +0000 UTC m=+3101.869327901" Nov 25 15:45:48 crc kubenswrapper[4731]: I1125 15:45:48.812133 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:45:48 crc kubenswrapper[4731]: E1125 15:45:48.813039 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
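machine-config-daemon-f6xlz is pinned at the ceiling of the kubelet's crash-loop back-off, which is why every retry logs the same "back-off 5m0s": by the documented defaults the delay starts at 10s and doubles after each failed restart until it caps at 5m, resetting only after roughly ten minutes of stable running. The RemoveContainer / "Error syncing pod" pairs repeating every ~13 seconds (15:45:09, :22, :35, :48, 15:46:02) are sync-loop attempts being skipped while that back-off is in force. A sketch of the schedule, assuming the standard defaults:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Kubelet CrashLoopBackOff schedule: 10s initial delay, doubled after
        // each failed restart, capped at 5m (the "back-off 5m0s" in this log).
        delay, limit := 10*time.Second, 5*time.Minute
        for restart := 1; delay < limit; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
        }
        fmt.Println("later restarts: wait", limit)
    }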
Nov 25 15:45:51 crc kubenswrapper[4731]: E1125 15:45:51.182949 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-conmon-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.465344 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-x7bgk"] Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.468409 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.476098 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7bgk"] Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.565858 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt675\" (UniqueName: \"kubernetes.io/projected/64adc15c-bc40-4248-8e75-8dc89e3ad970-kube-api-access-gt675\") pod \"redhat-marketplace-x7bgk\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.565948 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-catalog-content\") pod \"redhat-marketplace-x7bgk\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.566220 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-utilities\") pod \"redhat-marketplace-x7bgk\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.667837 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt675\" (UniqueName: \"kubernetes.io/projected/64adc15c-bc40-4248-8e75-8dc89e3ad970-kube-api-access-gt675\") pod \"redhat-marketplace-x7bgk\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.667905 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-catalog-content\") pod \"redhat-marketplace-x7bgk\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25
15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.667992 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-utilities\") pod \"redhat-marketplace-x7bgk\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.668480 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-utilities\") pod \"redhat-marketplace-x7bgk\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.668545 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-catalog-content\") pod \"redhat-marketplace-x7bgk\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.691407 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt675\" (UniqueName: \"kubernetes.io/projected/64adc15c-bc40-4248-8e75-8dc89e3ad970-kube-api-access-gt675\") pod \"redhat-marketplace-x7bgk\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:45:53 crc kubenswrapper[4731]: I1125 15:45:53.796198 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:45:54 crc kubenswrapper[4731]: I1125 15:45:54.297051 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7bgk"] Nov 25 15:45:54 crc kubenswrapper[4731]: W1125 15:45:54.307364 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64adc15c_bc40_4248_8e75_8dc89e3ad970.slice/crio-e1227bdb90061483717dd67d89a7859b5d4472ba1941708617657c65f22e8bcd WatchSource:0}: Error finding container e1227bdb90061483717dd67d89a7859b5d4472ba1941708617657c65f22e8bcd: Status 404 returned error can't find the container with id e1227bdb90061483717dd67d89a7859b5d4472ba1941708617657c65f22e8bcd Nov 25 15:45:55 crc kubenswrapper[4731]: I1125 15:45:55.196788 4731 generic.go:334] "Generic (PLEG): container finished" podID="64adc15c-bc40-4248-8e75-8dc89e3ad970" containerID="b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70" exitCode=0 Nov 25 15:45:55 crc kubenswrapper[4731]: I1125 15:45:55.196870 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7bgk" event={"ID":"64adc15c-bc40-4248-8e75-8dc89e3ad970","Type":"ContainerDied","Data":"b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70"} Nov 25 15:45:55 crc kubenswrapper[4731]: I1125 15:45:55.196956 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7bgk" event={"ID":"64adc15c-bc40-4248-8e75-8dc89e3ad970","Type":"ContainerStarted","Data":"e1227bdb90061483717dd67d89a7859b5d4472ba1941708617657c65f22e8bcd"} Nov 25 15:45:56 crc kubenswrapper[4731]: I1125 15:45:56.209365 4731 generic.go:334] "Generic (PLEG): container finished" podID="64adc15c-bc40-4248-8e75-8dc89e3ad970" 
containerID="2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c" exitCode=0 Nov 25 15:45:56 crc kubenswrapper[4731]: I1125 15:45:56.209464 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7bgk" event={"ID":"64adc15c-bc40-4248-8e75-8dc89e3ad970","Type":"ContainerDied","Data":"2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c"} Nov 25 15:45:57 crc kubenswrapper[4731]: I1125 15:45:57.221589 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7bgk" event={"ID":"64adc15c-bc40-4248-8e75-8dc89e3ad970","Type":"ContainerStarted","Data":"9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c"} Nov 25 15:45:57 crc kubenswrapper[4731]: I1125 15:45:57.256590 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-x7bgk" podStartSLOduration=2.612588013 podStartE2EDuration="4.256566929s" podCreationTimestamp="2025-11-25 15:45:53 +0000 UTC" firstStartedPulling="2025-11-25 15:45:55.202592626 +0000 UTC m=+3113.949727056" lastFinishedPulling="2025-11-25 15:45:56.846571542 +0000 UTC m=+3115.593705972" observedRunningTime="2025-11-25 15:45:57.251583835 +0000 UTC m=+3115.998718265" watchObservedRunningTime="2025-11-25 15:45:57.256566929 +0000 UTC m=+3116.003701359" Nov 25 15:46:01 crc kubenswrapper[4731]: E1125 15:46:01.439230 4731 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01072c78_f902_48e8_9edc_fe70ff7e4cd6.slice/crio-conmon-ebcfccc244f8b7358347715c8602f44df1adad8bbec46e435ff5f81e6e234656.scope\": RecentStats: unable to find data in memory cache]" Nov 25 15:46:02 crc kubenswrapper[4731]: I1125 15:46:02.821859 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:46:02 crc kubenswrapper[4731]: E1125 15:46:02.822548 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:46:02 crc kubenswrapper[4731]: E1125 15:46:02.823908 4731 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/5b97d1af0ff9ebcea7a830cd5ac73cf13dfa4382be9071d60042ed1c735a51b2/diff" to get inode usage: stat /var/lib/containers/storage/overlay/5b97d1af0ff9ebcea7a830cd5ac73cf13dfa4382be9071d60042ed1c735a51b2/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/openstack_ceilometer-0_01072c78-f902-48e8-9edc-fe70ff7e4cd6/proxy-httpd/0.log" to get inode usage: stat /var/log/pods/openstack_ceilometer-0_01072c78-f902-48e8-9edc-fe70ff7e4cd6/proxy-httpd/0.log: no such file or directory Nov 25 15:46:03 crc kubenswrapper[4731]: I1125 15:46:03.796342 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-x7bgk"
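The recurring cadvisor "Partial failure issuing cadvisor.ContainerInfoV2" errors and the fsHandler stat failure above all point back at the old ceilometer-0 pod (01072c78-...) replaced at 15:45:07: the stats collectors still hold references to its crio-ebcfccc... cgroup scopes and overlay diff directory after CRI-O removed them, so the lookups keep failing until the stale references age out of the cache. Noisy, but benign. A quick way to confirm such a path really is gone (a sketch; the path is the one named in the fsHandler message):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        // Overlay diff directory named in the fsHandler error above; it belonged
        // to a container of the removed ceilometer-0 pod.
        p := "/var/lib/containers/storage/overlay/5b97d1af0ff9ebcea7a830cd5ac73cf13dfa4382be9071d60042ed1c735a51b2/diff"
        if _, err := os.Stat(p); errors.Is(err, fs.ErrNotExist) {
            fmt.Println("already removed:", p) // matches "no such file or directory" in the log
        } else if err != nil {
            fmt.Println("stat failed:", err)
        } else {
            fmt.Println("still present:", p)
        }
    }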
Nov 25 15:46:03 crc kubenswrapper[4731]: I1125 15:46:03.796636 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:46:03 crc kubenswrapper[4731]: I1125 15:46:03.840334 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:46:04 crc kubenswrapper[4731]: I1125 15:46:04.367901 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:46:04 crc kubenswrapper[4731]: I1125 15:46:04.410841 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7bgk"] Nov 25 15:46:06 crc kubenswrapper[4731]: I1125 15:46:06.340518 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-x7bgk" podUID="64adc15c-bc40-4248-8e75-8dc89e3ad970" containerName="registry-server" containerID="cri-o://9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c" gracePeriod=2 Nov 25 15:46:06 crc kubenswrapper[4731]: I1125 15:46:06.855535 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:46:06 crc kubenswrapper[4731]: I1125 15:46:06.914325 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt675\" (UniqueName: \"kubernetes.io/projected/64adc15c-bc40-4248-8e75-8dc89e3ad970-kube-api-access-gt675\") pod \"64adc15c-bc40-4248-8e75-8dc89e3ad970\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " Nov 25 15:46:06 crc kubenswrapper[4731]: I1125 15:46:06.921007 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64adc15c-bc40-4248-8e75-8dc89e3ad970-kube-api-access-gt675" (OuterVolumeSpecName: "kube-api-access-gt675") pod "64adc15c-bc40-4248-8e75-8dc89e3ad970" (UID: "64adc15c-bc40-4248-8e75-8dc89e3ad970"). InnerVolumeSpecName "kube-api-access-gt675". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.016603 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-utilities\") pod \"64adc15c-bc40-4248-8e75-8dc89e3ad970\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.016677 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-catalog-content\") pod \"64adc15c-bc40-4248-8e75-8dc89e3ad970\" (UID: \"64adc15c-bc40-4248-8e75-8dc89e3ad970\") " Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.017160 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt675\" (UniqueName: \"kubernetes.io/projected/64adc15c-bc40-4248-8e75-8dc89e3ad970-kube-api-access-gt675\") on node \"crc\" DevicePath \"\"" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.017605 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-utilities" (OuterVolumeSpecName: "utilities") pod "64adc15c-bc40-4248-8e75-8dc89e3ad970" (UID: "64adc15c-bc40-4248-8e75-8dc89e3ad970"). InnerVolumeSpecName "utilities".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.036027 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "64adc15c-bc40-4248-8e75-8dc89e3ad970" (UID: "64adc15c-bc40-4248-8e75-8dc89e3ad970"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.119091 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.119140 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64adc15c-bc40-4248-8e75-8dc89e3ad970-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.351570 4731 generic.go:334] "Generic (PLEG): container finished" podID="64adc15c-bc40-4248-8e75-8dc89e3ad970" containerID="9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c" exitCode=0 Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.351616 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7bgk" event={"ID":"64adc15c-bc40-4248-8e75-8dc89e3ad970","Type":"ContainerDied","Data":"9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c"} Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.351648 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-x7bgk" event={"ID":"64adc15c-bc40-4248-8e75-8dc89e3ad970","Type":"ContainerDied","Data":"e1227bdb90061483717dd67d89a7859b5d4472ba1941708617657c65f22e8bcd"} Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.351647 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-x7bgk" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.351665 4731 scope.go:117] "RemoveContainer" containerID="9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.376109 4731 scope.go:117] "RemoveContainer" containerID="2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.393942 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7bgk"] Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.404536 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-x7bgk"] Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.411375 4731 scope.go:117] "RemoveContainer" containerID="b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.454856 4731 scope.go:117] "RemoveContainer" containerID="9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c" Nov 25 15:46:07 crc kubenswrapper[4731]: E1125 15:46:07.455390 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c\": container with ID starting with 9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c not found: ID does not exist" containerID="9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.455457 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c"} err="failed to get container status \"9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c\": rpc error: code = NotFound desc = could not find container \"9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c\": container with ID starting with 9ea8454737f292bec8b396b9146ee0021d79bca433195e20f4509a565048056c not found: ID does not exist" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.455520 4731 scope.go:117] "RemoveContainer" containerID="2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c" Nov 25 15:46:07 crc kubenswrapper[4731]: E1125 15:46:07.455919 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c\": container with ID starting with 2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c not found: ID does not exist" containerID="2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.455942 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c"} err="failed to get container status \"2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c\": rpc error: code = NotFound desc = could not find container \"2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c\": container with ID starting with 2499ac785804dca209bc00ad8ff39e45a1ff2eaf5e57ee03efad806b75cf447c not found: ID does not exist" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.455957 4731 scope.go:117] "RemoveContainer" 
containerID="b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70" Nov 25 15:46:07 crc kubenswrapper[4731]: E1125 15:46:07.456735 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70\": container with ID starting with b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70 not found: ID does not exist" containerID="b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70" Nov 25 15:46:07 crc kubenswrapper[4731]: I1125 15:46:07.456776 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70"} err="failed to get container status \"b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70\": rpc error: code = NotFound desc = could not find container \"b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70\": container with ID starting with b47ed6b6c31c56abc49715313392ceac77e3f2017ba644159ddf36ad6cc48c70 not found: ID does not exist" Nov 25 15:46:08 crc kubenswrapper[4731]: I1125 15:46:08.822017 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64adc15c-bc40-4248-8e75-8dc89e3ad970" path="/var/lib/kubelet/pods/64adc15c-bc40-4248-8e75-8dc89e3ad970/volumes" Nov 25 15:46:16 crc kubenswrapper[4731]: I1125 15:46:16.811533 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:46:16 crc kubenswrapper[4731]: E1125 15:46:16.812388 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:46:29 crc kubenswrapper[4731]: I1125 15:46:29.812347 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:46:29 crc kubenswrapper[4731]: E1125 15:46:29.813717 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:46:41 crc kubenswrapper[4731]: I1125 15:46:41.817803 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:46:41 crc kubenswrapper[4731]: E1125 15:46:41.818900 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:46:55 crc kubenswrapper[4731]: I1125 15:46:55.811999 4731 scope.go:117] "RemoveContainer" 
containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:46:55 crc kubenswrapper[4731]: E1125 15:46:55.814067 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:47:06 crc kubenswrapper[4731]: I1125 15:47:06.812592 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:47:06 crc kubenswrapper[4731]: E1125 15:47:06.813585 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:47:20 crc kubenswrapper[4731]: I1125 15:47:20.812772 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:47:20 crc kubenswrapper[4731]: E1125 15:47:20.814413 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:47:33 crc kubenswrapper[4731]: I1125 15:47:33.812030 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:47:33 crc kubenswrapper[4731]: E1125 15:47:33.814187 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:47:45 crc kubenswrapper[4731]: I1125 15:47:45.811703 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:47:45 crc kubenswrapper[4731]: E1125 15:47:45.813238 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:47:57 crc kubenswrapper[4731]: I1125 15:47:57.812231 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:47:57 crc kubenswrapper[4731]: E1125 15:47:57.813101 4731 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:48:08 crc kubenswrapper[4731]: I1125 15:48:08.812374 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:48:08 crc kubenswrapper[4731]: E1125 15:48:08.813207 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:48:20 crc kubenswrapper[4731]: I1125 15:48:20.811620 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:48:20 crc kubenswrapper[4731]: E1125 15:48:20.812464 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:48:32 crc kubenswrapper[4731]: I1125 15:48:32.817313 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:48:32 crc kubenswrapper[4731]: E1125 15:48:32.818176 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:48:45 crc kubenswrapper[4731]: I1125 15:48:45.344949 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_089e8703-f71e-4f81-91af-19224ba7b1b4/manager/0.log" Nov 25 15:48:45 crc kubenswrapper[4731]: I1125 15:48:45.811251 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:48:45 crc kubenswrapper[4731]: E1125 15:48:45.811722 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.370153 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj"] Nov 25 15:48:59 crc 
kubenswrapper[4731]: E1125 15:48:59.370981 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64adc15c-bc40-4248-8e75-8dc89e3ad970" containerName="registry-server" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.370993 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="64adc15c-bc40-4248-8e75-8dc89e3ad970" containerName="registry-server" Nov 25 15:48:59 crc kubenswrapper[4731]: E1125 15:48:59.371020 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64adc15c-bc40-4248-8e75-8dc89e3ad970" containerName="extract-content" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.371028 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="64adc15c-bc40-4248-8e75-8dc89e3ad970" containerName="extract-content" Nov 25 15:48:59 crc kubenswrapper[4731]: E1125 15:48:59.371043 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64adc15c-bc40-4248-8e75-8dc89e3ad970" containerName="extract-utilities" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.371049 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="64adc15c-bc40-4248-8e75-8dc89e3ad970" containerName="extract-utilities" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.371244 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="64adc15c-bc40-4248-8e75-8dc89e3ad970" containerName="registry-server" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.372737 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.376000 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.383721 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj"] Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.472820 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.473200 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rlph\" (UniqueName: \"kubernetes.io/projected/26eed3e4-1783-4718-b1a2-39ba37a78a12-kube-api-access-8rlph\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.473238 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.574804 4731 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.575151 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rlph\" (UniqueName: \"kubernetes.io/projected/26eed3e4-1783-4718-b1a2-39ba37a78a12-kube-api-access-8rlph\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.575261 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.575299 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.575499 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.592844 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rlph\" (UniqueName: \"kubernetes.io/projected/26eed3e4-1783-4718-b1a2-39ba37a78a12-kube-api-access-8rlph\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:48:59 crc kubenswrapper[4731]: I1125 15:48:59.709938 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:49:00 crc kubenswrapper[4731]: I1125 15:49:00.173039 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj"] Nov 25 15:49:00 crc kubenswrapper[4731]: I1125 15:49:00.812057 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:49:00 crc kubenswrapper[4731]: I1125 15:49:00.943800 4731 generic.go:334] "Generic (PLEG): container finished" podID="26eed3e4-1783-4718-b1a2-39ba37a78a12" containerID="ca586d3e9b1644be3c183cb0d86374526c7ccbdea71da2fdfd2c6c42880ce83e" exitCode=0 Nov 25 15:49:00 crc kubenswrapper[4731]: I1125 15:49:00.943853 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" event={"ID":"26eed3e4-1783-4718-b1a2-39ba37a78a12","Type":"ContainerDied","Data":"ca586d3e9b1644be3c183cb0d86374526c7ccbdea71da2fdfd2c6c42880ce83e"} Nov 25 15:49:00 crc kubenswrapper[4731]: I1125 15:49:00.943875 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" event={"ID":"26eed3e4-1783-4718-b1a2-39ba37a78a12","Type":"ContainerStarted","Data":"834719b02779911abe974fd536332ca6662828c7debd7a4c1910dcfefd8eaf96"} Nov 25 15:49:00 crc kubenswrapper[4731]: I1125 15:49:00.945459 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.709232 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ng695"] Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.712012 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.732338 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ng695"] Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.822769 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-catalog-content\") pod \"redhat-operators-ng695\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.822934 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4jlp\" (UniqueName: \"kubernetes.io/projected/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-kube-api-access-s4jlp\") pod \"redhat-operators-ng695\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.823095 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-utilities\") pod \"redhat-operators-ng695\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.924648 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4jlp\" (UniqueName: \"kubernetes.io/projected/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-kube-api-access-s4jlp\") pod \"redhat-operators-ng695\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.924843 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-utilities\") pod \"redhat-operators-ng695\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.924882 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-catalog-content\") pod \"redhat-operators-ng695\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.925453 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-catalog-content\") pod \"redhat-operators-ng695\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.925739 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-utilities\") pod \"redhat-operators-ng695\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.956758 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-s4jlp\" (UniqueName: \"kubernetes.io/projected/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-kube-api-access-s4jlp\") pod \"redhat-operators-ng695\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:01 crc kubenswrapper[4731]: I1125 15:49:01.962453 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"b2f9e4670024f055588fd4f90acc9b51e64b4b08f50f884f4bf445bf1c3e24ba"} Nov 25 15:49:02 crc kubenswrapper[4731]: I1125 15:49:02.034174 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:02 crc kubenswrapper[4731]: W1125 15:49:02.825316 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f48b2f1_207e_40ad_a10d_b99abf8c1eb0.slice/crio-58ded8a2607823f90087680288f85546c2a9b5ab9af61186434d269445a8322b WatchSource:0}: Error finding container 58ded8a2607823f90087680288f85546c2a9b5ab9af61186434d269445a8322b: Status 404 returned error can't find the container with id 58ded8a2607823f90087680288f85546c2a9b5ab9af61186434d269445a8322b Nov 25 15:49:02 crc kubenswrapper[4731]: I1125 15:49:02.834640 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ng695"] Nov 25 15:49:02 crc kubenswrapper[4731]: I1125 15:49:02.976733 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng695" event={"ID":"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0","Type":"ContainerStarted","Data":"58ded8a2607823f90087680288f85546c2a9b5ab9af61186434d269445a8322b"} Nov 25 15:49:02 crc kubenswrapper[4731]: I1125 15:49:02.985264 4731 generic.go:334] "Generic (PLEG): container finished" podID="26eed3e4-1783-4718-b1a2-39ba37a78a12" containerID="2fe7dadcc9e3bff12983f72a3022b3204a492848680287f5e1634b41e79ad88c" exitCode=0 Nov 25 15:49:02 crc kubenswrapper[4731]: I1125 15:49:02.985311 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" event={"ID":"26eed3e4-1783-4718-b1a2-39ba37a78a12","Type":"ContainerDied","Data":"2fe7dadcc9e3bff12983f72a3022b3204a492848680287f5e1634b41e79ad88c"} Nov 25 15:49:03 crc kubenswrapper[4731]: I1125 15:49:03.995181 4731 generic.go:334] "Generic (PLEG): container finished" podID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerID="9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea" exitCode=0 Nov 25 15:49:03 crc kubenswrapper[4731]: I1125 15:49:03.995243 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng695" event={"ID":"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0","Type":"ContainerDied","Data":"9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea"} Nov 25 15:49:04 crc kubenswrapper[4731]: I1125 15:49:04.002714 4731 generic.go:334] "Generic (PLEG): container finished" podID="26eed3e4-1783-4718-b1a2-39ba37a78a12" containerID="f76c2b815da05cde527a95ab91895d045803223302a1174f40ddd2911c438d97" exitCode=0 Nov 25 15:49:04 crc kubenswrapper[4731]: I1125 15:49:04.002762 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" 
event={"ID":"26eed3e4-1783-4718-b1a2-39ba37a78a12","Type":"ContainerDied","Data":"f76c2b815da05cde527a95ab91895d045803223302a1174f40ddd2911c438d97"} Nov 25 15:49:05 crc kubenswrapper[4731]: I1125 15:49:05.013515 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng695" event={"ID":"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0","Type":"ContainerStarted","Data":"4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd"} Nov 25 15:49:05 crc kubenswrapper[4731]: I1125 15:49:05.376692 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:49:05 crc kubenswrapper[4731]: I1125 15:49:05.521311 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-util\") pod \"26eed3e4-1783-4718-b1a2-39ba37a78a12\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " Nov 25 15:49:05 crc kubenswrapper[4731]: I1125 15:49:05.521772 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-bundle\") pod \"26eed3e4-1783-4718-b1a2-39ba37a78a12\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " Nov 25 15:49:05 crc kubenswrapper[4731]: I1125 15:49:05.522037 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rlph\" (UniqueName: \"kubernetes.io/projected/26eed3e4-1783-4718-b1a2-39ba37a78a12-kube-api-access-8rlph\") pod \"26eed3e4-1783-4718-b1a2-39ba37a78a12\" (UID: \"26eed3e4-1783-4718-b1a2-39ba37a78a12\") " Nov 25 15:49:05 crc kubenswrapper[4731]: I1125 15:49:05.523852 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-bundle" (OuterVolumeSpecName: "bundle") pod "26eed3e4-1783-4718-b1a2-39ba37a78a12" (UID: "26eed3e4-1783-4718-b1a2-39ba37a78a12"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:49:05 crc kubenswrapper[4731]: I1125 15:49:05.625946 4731 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:05 crc kubenswrapper[4731]: I1125 15:49:05.681377 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26eed3e4-1783-4718-b1a2-39ba37a78a12-kube-api-access-8rlph" (OuterVolumeSpecName: "kube-api-access-8rlph") pod "26eed3e4-1783-4718-b1a2-39ba37a78a12" (UID: "26eed3e4-1783-4718-b1a2-39ba37a78a12"). InnerVolumeSpecName "kube-api-access-8rlph". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:49:05 crc kubenswrapper[4731]: I1125 15:49:05.728759 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rlph\" (UniqueName: \"kubernetes.io/projected/26eed3e4-1783-4718-b1a2-39ba37a78a12-kube-api-access-8rlph\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:05 crc kubenswrapper[4731]: I1125 15:49:05.968584 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-util" (OuterVolumeSpecName: "util") pod "26eed3e4-1783-4718-b1a2-39ba37a78a12" (UID: "26eed3e4-1783-4718-b1a2-39ba37a78a12"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:49:06 crc kubenswrapper[4731]: I1125 15:49:06.037482 4731 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/26eed3e4-1783-4718-b1a2-39ba37a78a12-util\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:06 crc kubenswrapper[4731]: I1125 15:49:06.047988 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" Nov 25 15:49:06 crc kubenswrapper[4731]: I1125 15:49:06.051554 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj" event={"ID":"26eed3e4-1783-4718-b1a2-39ba37a78a12","Type":"ContainerDied","Data":"834719b02779911abe974fd536332ca6662828c7debd7a4c1910dcfefd8eaf96"} Nov 25 15:49:06 crc kubenswrapper[4731]: I1125 15:49:06.051657 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="834719b02779911abe974fd536332ca6662828c7debd7a4c1910dcfefd8eaf96" Nov 25 15:49:12 crc kubenswrapper[4731]: I1125 15:49:12.102000 4731 generic.go:334] "Generic (PLEG): container finished" podID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerID="4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd" exitCode=0 Nov 25 15:49:12 crc kubenswrapper[4731]: I1125 15:49:12.102076 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng695" event={"ID":"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0","Type":"ContainerDied","Data":"4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd"} Nov 25 15:49:13 crc kubenswrapper[4731]: I1125 15:49:13.114535 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng695" event={"ID":"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0","Type":"ContainerStarted","Data":"8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d"} Nov 25 15:49:13 crc kubenswrapper[4731]: I1125 15:49:13.162617 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ng695" podStartSLOduration=3.362430409 podStartE2EDuration="12.162591115s" podCreationTimestamp="2025-11-25 15:49:01 +0000 UTC" firstStartedPulling="2025-11-25 15:49:03.99784291 +0000 UTC m=+3302.744977340" lastFinishedPulling="2025-11-25 15:49:12.798003616 +0000 UTC m=+3311.545138046" observedRunningTime="2025-11-25 15:49:13.153336245 +0000 UTC m=+3311.900470675" watchObservedRunningTime="2025-11-25 15:49:13.162591115 +0000 UTC m=+3311.909725545" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.751393 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt"] Nov 25 15:49:16 crc kubenswrapper[4731]: E1125 15:49:16.752305 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26eed3e4-1783-4718-b1a2-39ba37a78a12" containerName="pull" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.752318 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="26eed3e4-1783-4718-b1a2-39ba37a78a12" containerName="pull" Nov 25 15:49:16 crc kubenswrapper[4731]: E1125 15:49:16.752354 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26eed3e4-1783-4718-b1a2-39ba37a78a12" containerName="util" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.752360 4731 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="26eed3e4-1783-4718-b1a2-39ba37a78a12" containerName="util" Nov 25 15:49:16 crc kubenswrapper[4731]: E1125 15:49:16.752377 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26eed3e4-1783-4718-b1a2-39ba37a78a12" containerName="extract" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.752385 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="26eed3e4-1783-4718-b1a2-39ba37a78a12" containerName="extract" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.752551 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="26eed3e4-1783-4718-b1a2-39ba37a78a12" containerName="extract" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.754972 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.757988 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.758131 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.758236 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-cqm4b" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.781986 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt"] Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.883758 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml9q5\" (UniqueName: \"kubernetes.io/projected/cbf9673f-5c6b-41d7-8eba-cdc0414a3910-kube-api-access-ml9q5\") pod \"obo-prometheus-operator-668cf9dfbb-2vtlt\" (UID: \"cbf9673f-5c6b-41d7-8eba-cdc0414a3910\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.899919 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7"] Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.901329 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.908225 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.908361 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-k6wg4" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.918378 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7"] Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.934599 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb"] Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.935863 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.985167 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml9q5\" (UniqueName: \"kubernetes.io/projected/cbf9673f-5c6b-41d7-8eba-cdc0414a3910-kube-api-access-ml9q5\") pod \"obo-prometheus-operator-668cf9dfbb-2vtlt\" (UID: \"cbf9673f-5c6b-41d7-8eba-cdc0414a3910\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt" Nov 25 15:49:16 crc kubenswrapper[4731]: I1125 15:49:16.992612 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb"] Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.029457 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml9q5\" (UniqueName: \"kubernetes.io/projected/cbf9673f-5c6b-41d7-8eba-cdc0414a3910-kube-api-access-ml9q5\") pod \"obo-prometheus-operator-668cf9dfbb-2vtlt\" (UID: \"cbf9673f-5c6b-41d7-8eba-cdc0414a3910\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.064289 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-9s4pj"] Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.065953 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.069041 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-rjtf2" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.069279 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.079847 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-9s4pj"] Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.085243 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.095434 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/900460b9-59fd-4456-925b-ed34b683ac09-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-wdpx7\" (UID: \"900460b9-59fd-4456-925b-ed34b683ac09\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.095502 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/900460b9-59fd-4456-925b-ed34b683ac09-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-wdpx7\" (UID: \"900460b9-59fd-4456-925b-ed34b683ac09\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.095582 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/236a458e-0795-42ad-8dab-84407618f96d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-6x8xb\" (UID: \"236a458e-0795-42ad-8dab-84407618f96d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.095603 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/236a458e-0795-42ad-8dab-84407618f96d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-6x8xb\" (UID: \"236a458e-0795-42ad-8dab-84407618f96d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.199131 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/900460b9-59fd-4456-925b-ed34b683ac09-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-wdpx7\" (UID: \"900460b9-59fd-4456-925b-ed34b683ac09\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.199187 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/900460b9-59fd-4456-925b-ed34b683ac09-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-wdpx7\" (UID: \"900460b9-59fd-4456-925b-ed34b683ac09\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.199235 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/60ec713e-70e1-4fbf-b011-b6e4bcf87539-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-9s4pj\" (UID: \"60ec713e-70e1-4fbf-b011-b6e4bcf87539\") " pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.199303 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/236a458e-0795-42ad-8dab-84407618f96d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-6x8xb\" (UID: \"236a458e-0795-42ad-8dab-84407618f96d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.199337 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/236a458e-0795-42ad-8dab-84407618f96d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-6x8xb\" (UID: \"236a458e-0795-42ad-8dab-84407618f96d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.199378 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjh2p\" (UniqueName: \"kubernetes.io/projected/60ec713e-70e1-4fbf-b011-b6e4bcf87539-kube-api-access-jjh2p\") pod \"observability-operator-d8bb48f5d-9s4pj\" (UID: \"60ec713e-70e1-4fbf-b011-b6e4bcf87539\") " pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.204527 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/236a458e-0795-42ad-8dab-84407618f96d-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-6x8xb\" (UID: \"236a458e-0795-42ad-8dab-84407618f96d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.209993 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/900460b9-59fd-4456-925b-ed34b683ac09-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-wdpx7\" (UID: \"900460b9-59fd-4456-925b-ed34b683ac09\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.210320 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/236a458e-0795-42ad-8dab-84407618f96d-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-6x8xb\" (UID: \"236a458e-0795-42ad-8dab-84407618f96d\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.214522 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/900460b9-59fd-4456-925b-ed34b683ac09-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-69fb756767-wdpx7\" (UID: \"900460b9-59fd-4456-925b-ed34b683ac09\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.221360 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.268272 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.290972 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qz72z"] Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.292333 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qz72z" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.300759 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/60ec713e-70e1-4fbf-b011-b6e4bcf87539-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-9s4pj\" (UID: \"60ec713e-70e1-4fbf-b011-b6e4bcf87539\") " pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.300854 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjh2p\" (UniqueName: \"kubernetes.io/projected/60ec713e-70e1-4fbf-b011-b6e4bcf87539-kube-api-access-jjh2p\") pod \"observability-operator-d8bb48f5d-9s4pj\" (UID: \"60ec713e-70e1-4fbf-b011-b6e4bcf87539\") " pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.306482 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-p4bzv" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.314024 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/60ec713e-70e1-4fbf-b011-b6e4bcf87539-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-9s4pj\" (UID: \"60ec713e-70e1-4fbf-b011-b6e4bcf87539\") " pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.324113 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjh2p\" (UniqueName: \"kubernetes.io/projected/60ec713e-70e1-4fbf-b011-b6e4bcf87539-kube-api-access-jjh2p\") pod \"observability-operator-d8bb48f5d-9s4pj\" (UID: \"60ec713e-70e1-4fbf-b011-b6e4bcf87539\") " pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.329552 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qz72z"] Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.407989 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bfcf5cee-d085-4fb4-8144-9a38143246b4-openshift-service-ca\") pod \"perses-operator-5446b9c989-qz72z\" (UID: \"bfcf5cee-d085-4fb4-8144-9a38143246b4\") " pod="openshift-operators/perses-operator-5446b9c989-qz72z" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.408226 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzxbm\" (UniqueName: \"kubernetes.io/projected/bfcf5cee-d085-4fb4-8144-9a38143246b4-kube-api-access-kzxbm\") pod \"perses-operator-5446b9c989-qz72z\" (UID: \"bfcf5cee-d085-4fb4-8144-9a38143246b4\") " pod="openshift-operators/perses-operator-5446b9c989-qz72z" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.492689 4731 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.510442 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bfcf5cee-d085-4fb4-8144-9a38143246b4-openshift-service-ca\") pod \"perses-operator-5446b9c989-qz72z\" (UID: \"bfcf5cee-d085-4fb4-8144-9a38143246b4\") " pod="openshift-operators/perses-operator-5446b9c989-qz72z" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.510650 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzxbm\" (UniqueName: \"kubernetes.io/projected/bfcf5cee-d085-4fb4-8144-9a38143246b4-kube-api-access-kzxbm\") pod \"perses-operator-5446b9c989-qz72z\" (UID: \"bfcf5cee-d085-4fb4-8144-9a38143246b4\") " pod="openshift-operators/perses-operator-5446b9c989-qz72z" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.511959 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bfcf5cee-d085-4fb4-8144-9a38143246b4-openshift-service-ca\") pod \"perses-operator-5446b9c989-qz72z\" (UID: \"bfcf5cee-d085-4fb4-8144-9a38143246b4\") " pod="openshift-operators/perses-operator-5446b9c989-qz72z" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.546718 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzxbm\" (UniqueName: \"kubernetes.io/projected/bfcf5cee-d085-4fb4-8144-9a38143246b4-kube-api-access-kzxbm\") pod \"perses-operator-5446b9c989-qz72z\" (UID: \"bfcf5cee-d085-4fb4-8144-9a38143246b4\") " pod="openshift-operators/perses-operator-5446b9c989-qz72z" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.696726 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qz72z" Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.756214 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt"] Nov 25 15:49:17 crc kubenswrapper[4731]: W1125 15:49:17.825170 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbf9673f_5c6b_41d7_8eba_cdc0414a3910.slice/crio-5b895caf9f812cb828dd5cc41280c98fe69adeeeb316f7a2c60a6d56e28cdf29 WatchSource:0}: Error finding container 5b895caf9f812cb828dd5cc41280c98fe69adeeeb316f7a2c60a6d56e28cdf29: Status 404 returned error can't find the container with id 5b895caf9f812cb828dd5cc41280c98fe69adeeeb316f7a2c60a6d56e28cdf29 Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.857515 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7"] Nov 25 15:49:17 crc kubenswrapper[4731]: I1125 15:49:17.945275 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb"] Nov 25 15:49:18 crc kubenswrapper[4731]: W1125 15:49:18.050283 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60ec713e_70e1_4fbf_b011_b6e4bcf87539.slice/crio-a879fbd077ff2da8858dd858a6f7cafcdef802bbed5e0ecdeac9570cedcaf182 WatchSource:0}: Error finding container a879fbd077ff2da8858dd858a6f7cafcdef802bbed5e0ecdeac9570cedcaf182: Status 404 returned error can't find the container with id a879fbd077ff2da8858dd858a6f7cafcdef802bbed5e0ecdeac9570cedcaf182 Nov 25 15:49:18 crc kubenswrapper[4731]: I1125 15:49:18.063108 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-9s4pj"] Nov 25 15:49:18 crc kubenswrapper[4731]: I1125 15:49:18.284100 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" event={"ID":"236a458e-0795-42ad-8dab-84407618f96d","Type":"ContainerStarted","Data":"7a6720cbd2de9ef9997ee28a36031d0d5b3a6aa0e8a1e8ed6f1cd3607a4fbf75"} Nov 25 15:49:18 crc kubenswrapper[4731]: I1125 15:49:18.320081 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt" event={"ID":"cbf9673f-5c6b-41d7-8eba-cdc0414a3910","Type":"ContainerStarted","Data":"5b895caf9f812cb828dd5cc41280c98fe69adeeeb316f7a2c60a6d56e28cdf29"} Nov 25 15:49:18 crc kubenswrapper[4731]: I1125 15:49:18.339337 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" event={"ID":"60ec713e-70e1-4fbf-b011-b6e4bcf87539","Type":"ContainerStarted","Data":"a879fbd077ff2da8858dd858a6f7cafcdef802bbed5e0ecdeac9570cedcaf182"} Nov 25 15:49:18 crc kubenswrapper[4731]: I1125 15:49:18.353977 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" event={"ID":"900460b9-59fd-4456-925b-ed34b683ac09","Type":"ContainerStarted","Data":"331aed3880f470cd1b980826c8f7ed34f229192ce83cad57121c6a5c3b6ef8f8"} Nov 25 15:49:18 crc kubenswrapper[4731]: I1125 15:49:18.401855 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qz72z"] Nov 25 15:49:19 crc kubenswrapper[4731]: I1125 
15:49:19.370302 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-qz72z" event={"ID":"bfcf5cee-d085-4fb4-8144-9a38143246b4","Type":"ContainerStarted","Data":"e47921c4237ec55b8461b20980a87654d943cc35416bb9e2c105914f0458747e"} Nov 25 15:49:22 crc kubenswrapper[4731]: I1125 15:49:22.036222 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:22 crc kubenswrapper[4731]: I1125 15:49:22.036754 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:23 crc kubenswrapper[4731]: I1125 15:49:23.118699 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ng695" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="registry-server" probeResult="failure" output=< Nov 25 15:49:23 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:49:23 crc kubenswrapper[4731]: > Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.305366 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4pq5p"] Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.308218 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.321694 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4pq5p"] Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.494114 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-catalog-content\") pod \"community-operators-4pq5p\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.494169 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q775v\" (UniqueName: \"kubernetes.io/projected/1848bcec-6275-4c4c-a4da-954d9798b9a8-kube-api-access-q775v\") pod \"community-operators-4pq5p\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.494217 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-utilities\") pod \"community-operators-4pq5p\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.596270 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-utilities\") pod \"community-operators-4pq5p\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.596482 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-catalog-content\") pod \"community-operators-4pq5p\" (UID: 
\"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.596510 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q775v\" (UniqueName: \"kubernetes.io/projected/1848bcec-6275-4c4c-a4da-954d9798b9a8-kube-api-access-q775v\") pod \"community-operators-4pq5p\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.596861 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-utilities\") pod \"community-operators-4pq5p\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.597052 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-catalog-content\") pod \"community-operators-4pq5p\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.621971 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q775v\" (UniqueName: \"kubernetes.io/projected/1848bcec-6275-4c4c-a4da-954d9798b9a8-kube-api-access-q775v\") pod \"community-operators-4pq5p\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:28 crc kubenswrapper[4731]: I1125 15:49:28.630400 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:33 crc kubenswrapper[4731]: I1125 15:49:33.085223 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ng695" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="registry-server" probeResult="failure" output=< Nov 25 15:49:33 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:49:33 crc kubenswrapper[4731]: > Nov 25 15:49:38 crc kubenswrapper[4731]: E1125 15:49:38.606588 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3" Nov 25 15:49:38 crc kubenswrapper[4731]: E1125 15:49:38.607264 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:1133c973c7472c665f910a722e19c8e2e27accb34b90fab67f14548627ce9c62,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ml9q5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-668cf9dfbb-2vtlt_openshift-operators(cbf9673f-5c6b-41d7-8eba-cdc0414a3910): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 15:49:38 crc kubenswrapper[4731]: E1125 15:49:38.608995 4731 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt" podUID="cbf9673f-5c6b-41d7-8eba-cdc0414a3910" Nov 25 15:49:39 crc kubenswrapper[4731]: E1125 15:49:39.347131 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Nov 25 15:49:39 crc kubenswrapper[4731]: E1125 15:49:39.347966 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-69fb756767-6x8xb_openshift-operators(236a458e-0795-42ad-8dab-84407618f96d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 15:49:39 crc kubenswrapper[4731]: E1125 15:49:39.349250 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" podUID="236a458e-0795-42ad-8dab-84407618f96d" Nov 25 15:49:39 crc kubenswrapper[4731]: E1125 15:49:39.632439 4731 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:203cf5b9dc1460f09e75f58d8b5cf7df5e57c18c8c6a41c14b5e8977d83263f3\\\"\"" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt" podUID="cbf9673f-5c6b-41d7-8eba-cdc0414a3910" Nov 25 15:49:39 crc kubenswrapper[4731]: E1125 15:49:39.632476 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" podUID="236a458e-0795-42ad-8dab-84407618f96d" Nov 25 15:49:43 crc kubenswrapper[4731]: I1125 15:49:43.091592 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ng695" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="registry-server" probeResult="failure" output=< Nov 25 15:49:43 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 15:49:43 crc kubenswrapper[4731]: > Nov 25 15:49:43 crc kubenswrapper[4731]: E1125 15:49:43.680492 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb" Nov 25 15:49:43 crc kubenswrapper[4731]: E1125 15:49:43.680925 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb,Command:[],Args:[--namespace=$(NAMESPACE) --images=perses=$(RELATED_IMAGE_PERSES) --images=alertmanager=$(RELATED_IMAGE_ALERTMANAGER) --images=prometheus=$(RELATED_IMAGE_PROMETHEUS) --images=thanos=$(RELATED_IMAGE_THANOS) --images=ui-dashboards=$(RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN) --images=ui-distributed-tracing=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN) --images=ui-distributed-tracing-pf5=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5) --images=ui-distributed-tracing-pf4=$(RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4) --images=ui-logging=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN) --images=ui-logging-pf4=$(RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4) --images=ui-troubleshooting-panel=$(RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN) --images=ui-monitoring=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN) --images=ui-monitoring-pf5=$(RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5) --images=korrel8r=$(RELATED_IMAGE_KORREL8R) --images=health-analyzer=$(RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER) 
--openshift.enabled=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:RELATED_IMAGE_ALERTMANAGER,Value:registry.redhat.io/cluster-observability-operator/alertmanager-rhel9@sha256:e718854a7d6ca8accf0fa72db0eb902e46c44d747ad51dc3f06bba0cefaa3c01,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS,Value:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_THANOS,Value:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:d972f4faa5e9c121402d23ed85002f26af48ec36b1b71a7489d677b3913d08b4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PERSES,Value:registry.redhat.io/cluster-observability-operator/perses-rhel9@sha256:91531137fc1dcd740e277e0f65e120a0176a16f788c14c27925b61aa0b792ade,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DASHBOARDS_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/dashboards-console-plugin-rhel9@sha256:a69da8bbca8a28dd2925f864d51cc31cf761b10532c553095ba40b242ef701cb,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-rhel9@sha256:897e1bfad1187062725b54d87107bd0155972257a50d8335dd29e1999b828a4f,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf5-rhel9@sha256:95fe5b5746ca8c07ac9217ce2d8ac8e6afad17af210f9d8e0074df1310b209a8,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_DISTRIBUTED_TRACING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/distributed-tracing-console-plugin-pf4-rhel9@sha256:e9d9a89e4d8126a62b1852055482258ee528cac6398dd5d43ebad75ace0f33c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-rhel9@sha256:ec684a0645ceb917b019af7ddba68c3533416e356ab0d0320a30e75ca7ebb31b,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_LOGGING_PLUGIN_PF4,Value:registry.redhat.io/cluster-observability-operator/logging-console-plugin-pf4-rhel9@sha256:3b9693fcde9b3a9494fb04735b1f7cfd0426f10be820fdc3f024175c0d3df1c9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_TROUBLESHOOTING_PANEL_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/troubleshooting-panel-console-plugin-rhel9@sha256:580606f194180accc8abba099e17a26dca7522ec6d233fa2fdd40312771703e3,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-rhel9@sha256:e03777be39e71701935059cd877603874a13ac94daa73219d4e5e545599d78a9,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CONSOLE_MONITORING_PLUGIN_PF5,Value:registry.redhat.io/cluster-observability-operator/monitoring-console-plugin-pf5-rhel9@sha256:aa47256193cfd2877853878e1ae97d2ab8b8e5deae62b387cbfad02b284d379c,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KORREL8R,Value:registry.redhat.io/cluster-observability-operator/korrel8r-rhel9@sha256:c595ff56b2cb85514bf4784db6ddb82e4e657e3e708a7fb695fc4997379a94d4,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLUSTER_HEALTH_ANALYZER,Value:registry.redhat.io/cluster-observability-operator/cluster-health-analyzer-rhel9@sha256:45a4ec2a519bcec99e886aa91
596d5356a2414a2bd103baaef9fa7838c672eb2,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{400 -3} {} 400m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:observability-operator-tls,ReadOnly:true,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jjh2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod observability-operator-d8bb48f5d-9s4pj_openshift-operators(60ec713e-70e1-4fbf-b011-b6e4bcf87539): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 15:49:43 crc kubenswrapper[4731]: E1125 15:49:43.682149 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" podUID="60ec713e-70e1-4fbf-b011-b6e4bcf87539" Nov 25 15:49:43 crc kubenswrapper[4731]: I1125 15:49:43.988314 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 15:49:43 crc kubenswrapper[4731]: I1125 15:49:43.988626 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-api" containerID="cri-o://3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3" gracePeriod=30 Nov 25 15:49:43 crc kubenswrapper[4731]: I1125 15:49:43.988666 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-notifier" containerID="cri-o://ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266" gracePeriod=30 Nov 25 15:49:43 crc kubenswrapper[4731]: I1125 15:49:43.988825 4731 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/aodh-0" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-listener" containerID="cri-o://0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e" gracePeriod=30 Nov 25 15:49:43 crc kubenswrapper[4731]: I1125 15:49:43.988876 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-evaluator" containerID="cri-o://c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739" gracePeriod=30 Nov 25 15:49:44 crc kubenswrapper[4731]: E1125 15:49:44.465866 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Nov 25 15:49:44 crc kubenswrapper[4731]: E1125 15:49:44.466087 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-69fb756767-wdpx7_openshift-operators(900460b9-59fd-4456-925b-ed34b683ac09): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 15:49:44 crc kubenswrapper[4731]: E1125 15:49:44.467287 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: 
copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" podUID="900460b9-59fd-4456-925b-ed34b683ac09" Nov 25 15:49:44 crc kubenswrapper[4731]: I1125 15:49:44.698126 4731 generic.go:334] "Generic (PLEG): container finished" podID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerID="c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739" exitCode=0 Nov 25 15:49:44 crc kubenswrapper[4731]: I1125 15:49:44.698170 4731 generic.go:334] "Generic (PLEG): container finished" podID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerID="3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3" exitCode=0 Nov 25 15:49:44 crc kubenswrapper[4731]: I1125 15:49:44.698216 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7a14477f-8284-44d8-8f6d-67f5efec9f52","Type":"ContainerDied","Data":"c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739"} Nov 25 15:49:44 crc kubenswrapper[4731]: I1125 15:49:44.698261 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7a14477f-8284-44d8-8f6d-67f5efec9f52","Type":"ContainerDied","Data":"3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3"} Nov 25 15:49:44 crc kubenswrapper[4731]: E1125 15:49:44.703190 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" podUID="900460b9-59fd-4456-925b-ed34b683ac09" Nov 25 15:49:44 crc kubenswrapper[4731]: E1125 15:49:44.703257 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/cluster-observability-rhel9-operator@sha256:ce7d2904f7b238aa37dfe74a0b76bf73629e7a14fa52bf54b0ecf030ca36f1bb\\\"\"" pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" podUID="60ec713e-70e1-4fbf-b011-b6e4bcf87539" Nov 25 15:49:45 crc kubenswrapper[4731]: E1125 15:49:45.099403 4731 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385" Nov 25 15:49:45 crc kubenswrapper[4731]: E1125 15:49:45.099976 4731 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:perses-operator,Image:registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openshift-service-ca,ReadOnly:true,MountPath:/ca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kzxbm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000350000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod perses-operator-5446b9c989-qz72z_openshift-operators(bfcf5cee-d085-4fb4-8144-9a38143246b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Nov 25 15:49:45 crc kubenswrapper[4731]: E1125 15:49:45.104968 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/perses-operator-5446b9c989-qz72z" podUID="bfcf5cee-d085-4fb4-8144-9a38143246b4" Nov 25 15:49:45 crc kubenswrapper[4731]: I1125 15:49:45.525050 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4pq5p"] Nov 25 15:49:45 crc kubenswrapper[4731]: I1125 15:49:45.709778 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pq5p" event={"ID":"1848bcec-6275-4c4c-a4da-954d9798b9a8","Type":"ContainerStarted","Data":"5dadc9c45a9dfbc52f0b9d132bbc26c0c9dd12f78679e8ceb5234795d329948a"} Nov 25 15:49:45 crc kubenswrapper[4731]: E1125 15:49:45.711486 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"perses-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/perses-rhel9-operator@sha256:9aec4c328ec43e40481e06ca5808deead74b75c0aacb90e9e72966c3fa14f385\\\"\"" pod="openshift-operators/perses-operator-5446b9c989-qz72z" podUID="bfcf5cee-d085-4fb4-8144-9a38143246b4" Nov 25 15:49:46 crc kubenswrapper[4731]: I1125 15:49:46.720331 4731 generic.go:334] "Generic (PLEG): container finished" podID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerID="94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2" exitCode=0 Nov 25 15:49:46 crc kubenswrapper[4731]: I1125 
15:49:46.720448 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pq5p" event={"ID":"1848bcec-6275-4c4c-a4da-954d9798b9a8","Type":"ContainerDied","Data":"94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2"} Nov 25 15:49:48 crc kubenswrapper[4731]: I1125 15:49:48.749468 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pq5p" event={"ID":"1848bcec-6275-4c4c-a4da-954d9798b9a8","Type":"ContainerStarted","Data":"120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837"} Nov 25 15:49:49 crc kubenswrapper[4731]: I1125 15:49:49.763788 4731 generic.go:334] "Generic (PLEG): container finished" podID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerID="120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837" exitCode=0 Nov 25 15:49:49 crc kubenswrapper[4731]: I1125 15:49:49.763869 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pq5p" event={"ID":"1848bcec-6275-4c4c-a4da-954d9798b9a8","Type":"ContainerDied","Data":"120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837"} Nov 25 15:49:49 crc kubenswrapper[4731]: I1125 15:49:49.771210 4731 generic.go:334] "Generic (PLEG): container finished" podID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerID="ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266" exitCode=0 Nov 25 15:49:49 crc kubenswrapper[4731]: I1125 15:49:49.771267 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7a14477f-8284-44d8-8f6d-67f5efec9f52","Type":"ContainerDied","Data":"ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266"} Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.372744 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.443876 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-public-tls-certs\") pod \"7a14477f-8284-44d8-8f6d-67f5efec9f52\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.443975 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-scripts\") pod \"7a14477f-8284-44d8-8f6d-67f5efec9f52\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.444053 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-internal-tls-certs\") pod \"7a14477f-8284-44d8-8f6d-67f5efec9f52\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.444212 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-config-data\") pod \"7a14477f-8284-44d8-8f6d-67f5efec9f52\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.444322 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-combined-ca-bundle\") pod \"7a14477f-8284-44d8-8f6d-67f5efec9f52\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.444385 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ht55p\" (UniqueName: \"kubernetes.io/projected/7a14477f-8284-44d8-8f6d-67f5efec9f52-kube-api-access-ht55p\") pod \"7a14477f-8284-44d8-8f6d-67f5efec9f52\" (UID: \"7a14477f-8284-44d8-8f6d-67f5efec9f52\") " Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.454070 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a14477f-8284-44d8-8f6d-67f5efec9f52-kube-api-access-ht55p" (OuterVolumeSpecName: "kube-api-access-ht55p") pod "7a14477f-8284-44d8-8f6d-67f5efec9f52" (UID: "7a14477f-8284-44d8-8f6d-67f5efec9f52"). InnerVolumeSpecName "kube-api-access-ht55p". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.454121 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-scripts" (OuterVolumeSpecName: "scripts") pod "7a14477f-8284-44d8-8f6d-67f5efec9f52" (UID: "7a14477f-8284-44d8-8f6d-67f5efec9f52"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.505010 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7a14477f-8284-44d8-8f6d-67f5efec9f52" (UID: "7a14477f-8284-44d8-8f6d-67f5efec9f52"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.531105 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7a14477f-8284-44d8-8f6d-67f5efec9f52" (UID: "7a14477f-8284-44d8-8f6d-67f5efec9f52"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.547195 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ht55p\" (UniqueName: \"kubernetes.io/projected/7a14477f-8284-44d8-8f6d-67f5efec9f52-kube-api-access-ht55p\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.547251 4731 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.547264 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.547274 4731 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.607362 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7a14477f-8284-44d8-8f6d-67f5efec9f52" (UID: "7a14477f-8284-44d8-8f6d-67f5efec9f52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.641558 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-config-data" (OuterVolumeSpecName: "config-data") pod "7a14477f-8284-44d8-8f6d-67f5efec9f52" (UID: "7a14477f-8284-44d8-8f6d-67f5efec9f52"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.649094 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.649121 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7a14477f-8284-44d8-8f6d-67f5efec9f52-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.784227 4731 generic.go:334] "Generic (PLEG): container finished" podID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerID="0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e" exitCode=0 Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.784312 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.784353 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7a14477f-8284-44d8-8f6d-67f5efec9f52","Type":"ContainerDied","Data":"0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e"} Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.784384 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"7a14477f-8284-44d8-8f6d-67f5efec9f52","Type":"ContainerDied","Data":"3a2c823c3fc992f7bf53a1dd842b87ba0f1d845e1536cd2d6214707a5d4f548d"} Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.784401 4731 scope.go:117] "RemoveContainer" containerID="0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.786863 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pq5p" event={"ID":"1848bcec-6275-4c4c-a4da-954d9798b9a8","Type":"ContainerStarted","Data":"55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4"} Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.807400 4731 scope.go:117] "RemoveContainer" containerID="ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.824088 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4pq5p" podStartSLOduration=19.369315352 podStartE2EDuration="22.824064058s" podCreationTimestamp="2025-11-25 15:49:28 +0000 UTC" firstStartedPulling="2025-11-25 15:49:46.722232606 +0000 UTC m=+3345.469367036" lastFinishedPulling="2025-11-25 15:49:50.176981312 +0000 UTC m=+3348.924115742" observedRunningTime="2025-11-25 15:49:50.817190697 +0000 UTC m=+3349.564325127" watchObservedRunningTime="2025-11-25 15:49:50.824064058 +0000 UTC m=+3349.571198488" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.833560 4731 scope.go:117] "RemoveContainer" containerID="c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.843791 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.857385 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.860809 4731 scope.go:117] "RemoveContainer" containerID="3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.872886 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 25 15:49:50 crc kubenswrapper[4731]: E1125 15:49:50.873394 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-listener" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.873417 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-listener" Nov 25 15:49:50 crc kubenswrapper[4731]: E1125 15:49:50.873439 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-api" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.873446 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-api" Nov 25 15:49:50 crc kubenswrapper[4731]: E1125 
15:49:50.873459 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-notifier" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.873467 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-notifier" Nov 25 15:49:50 crc kubenswrapper[4731]: E1125 15:49:50.873489 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-evaluator" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.873495 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-evaluator" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.873850 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-listener" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.873879 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-notifier" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.873899 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-api" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.873915 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" containerName="aodh-evaluator" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.876007 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.881458 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.881777 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-h9h4w" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.882050 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.882193 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.882375 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.893951 4731 scope.go:117] "RemoveContainer" containerID="0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e" Nov 25 15:49:50 crc kubenswrapper[4731]: E1125 15:49:50.896208 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e\": container with ID starting with 0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e not found: ID does not exist" containerID="0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.896267 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e"} err="failed to get container status \"0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e\": rpc error: code = NotFound desc = 
could not find container \"0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e\": container with ID starting with 0e36718173a6401b91afa789ac9ee20a2ed4c986e2ec44aaed9ad7c2e9f04d9e not found: ID does not exist" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.896304 4731 scope.go:117] "RemoveContainer" containerID="ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266" Nov 25 15:49:50 crc kubenswrapper[4731]: E1125 15:49:50.896753 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266\": container with ID starting with ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266 not found: ID does not exist" containerID="ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.896808 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266"} err="failed to get container status \"ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266\": rpc error: code = NotFound desc = could not find container \"ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266\": container with ID starting with ab1012869a3ac8e7186a4f726c1f7b8efc2edc50a68924fbc07d6b2e7c2e4266 not found: ID does not exist" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.896858 4731 scope.go:117] "RemoveContainer" containerID="c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739" Nov 25 15:49:50 crc kubenswrapper[4731]: E1125 15:49:50.898427 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739\": container with ID starting with c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739 not found: ID does not exist" containerID="c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.898497 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739"} err="failed to get container status \"c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739\": rpc error: code = NotFound desc = could not find container \"c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739\": container with ID starting with c8a8333e920150e83fd0bbc497123a972502a772a8d84239b8ce853975bec739 not found: ID does not exist" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.898540 4731 scope.go:117] "RemoveContainer" containerID="3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3" Nov 25 15:49:50 crc kubenswrapper[4731]: E1125 15:49:50.898892 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3\": container with ID starting with 3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3 not found: ID does not exist" containerID="3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.898916 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3"} err="failed to get container status \"3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3\": rpc error: code = NotFound desc = could not find container \"3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3\": container with ID starting with 3132c7b7bd744f60e07edef53dc0d43c075d211490acb44a6653d9efe3c7f0c3 not found: ID does not exist" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.911952 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.954571 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-internal-tls-certs\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.954656 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-config-data\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.954698 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.954726 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-public-tls-certs\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.954842 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-scripts\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:50 crc kubenswrapper[4731]: I1125 15:49:50.954896 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2dwn\" (UniqueName: \"kubernetes.io/projected/e68de59d-3f4f-46ce-9543-257fc725e94b-kube-api-access-b2dwn\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.057183 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-config-data\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.057266 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.057303 4731 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-public-tls-certs\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.057374 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-scripts\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.057413 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2dwn\" (UniqueName: \"kubernetes.io/projected/e68de59d-3f4f-46ce-9543-257fc725e94b-kube-api-access-b2dwn\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.057486 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-internal-tls-certs\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.062686 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-scripts\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.063136 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-config-data\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.063271 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.064219 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-public-tls-certs\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.068034 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-internal-tls-certs\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.079854 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2dwn\" (UniqueName: \"kubernetes.io/projected/e68de59d-3f4f-46ce-9543-257fc725e94b-kube-api-access-b2dwn\") pod \"aodh-0\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.199335 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 15:49:51 crc kubenswrapper[4731]: W1125 15:49:51.689588 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode68de59d_3f4f_46ce_9543_257fc725e94b.slice/crio-dc057c73805e1c9d5a0c4b532c9d13fcb7c6bb41f251a6901bc38e8638144838 WatchSource:0}: Error finding container dc057c73805e1c9d5a0c4b532c9d13fcb7c6bb41f251a6901bc38e8638144838: Status 404 returned error can't find the container with id dc057c73805e1c9d5a0c4b532c9d13fcb7c6bb41f251a6901bc38e8638144838 Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.691604 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 15:49:51 crc kubenswrapper[4731]: I1125 15:49:51.796269 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e68de59d-3f4f-46ce-9543-257fc725e94b","Type":"ContainerStarted","Data":"dc057c73805e1c9d5a0c4b532c9d13fcb7c6bb41f251a6901bc38e8638144838"} Nov 25 15:49:52 crc kubenswrapper[4731]: I1125 15:49:52.097776 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:52 crc kubenswrapper[4731]: I1125 15:49:52.151270 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:52 crc kubenswrapper[4731]: I1125 15:49:52.341232 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ng695"] Nov 25 15:49:52 crc kubenswrapper[4731]: I1125 15:49:52.806951 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e68de59d-3f4f-46ce-9543-257fc725e94b","Type":"ContainerStarted","Data":"4b62654d8a1456ff154280cceb518c57721d537a81d6f32ba69ffa9d9a36600b"} Nov 25 15:49:52 crc kubenswrapper[4731]: I1125 15:49:52.832509 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a14477f-8284-44d8-8f6d-67f5efec9f52" path="/var/lib/kubelet/pods/7a14477f-8284-44d8-8f6d-67f5efec9f52/volumes" Nov 25 15:49:53 crc kubenswrapper[4731]: I1125 15:49:53.824083 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt" event={"ID":"cbf9673f-5c6b-41d7-8eba-cdc0414a3910","Type":"ContainerStarted","Data":"12772268afe4538eed9d4d95500cfdc02b1d22b5e28c23a39f18bba867657a64"} Nov 25 15:49:53 crc kubenswrapper[4731]: I1125 15:49:53.826053 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e68de59d-3f4f-46ce-9543-257fc725e94b","Type":"ContainerStarted","Data":"cd90116793f6c00f619eb14840467d104a59fdfa44339b80b899dd1b4ff0ce5e"} Nov 25 15:49:53 crc kubenswrapper[4731]: I1125 15:49:53.826260 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ng695" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="registry-server" containerID="cri-o://8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d" gracePeriod=2 Nov 25 15:49:53 crc kubenswrapper[4731]: I1125 15:49:53.855297 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-2vtlt" podStartSLOduration=3.067040526 podStartE2EDuration="37.855272211s" podCreationTimestamp="2025-11-25 15:49:16 +0000 UTC" firstStartedPulling="2025-11-25 15:49:17.842975383 +0000 UTC m=+3316.590109803" lastFinishedPulling="2025-11-25 
15:49:52.631207048 +0000 UTC m=+3351.378341488" observedRunningTime="2025-11-25 15:49:53.843621912 +0000 UTC m=+3352.590756352" watchObservedRunningTime="2025-11-25 15:49:53.855272211 +0000 UTC m=+3352.602406651" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.535409 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.624363 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-utilities\") pod \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.624452 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-catalog-content\") pod \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.624765 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4jlp\" (UniqueName: \"kubernetes.io/projected/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-kube-api-access-s4jlp\") pod \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\" (UID: \"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0\") " Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.625347 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-utilities" (OuterVolumeSpecName: "utilities") pod "7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" (UID: "7f48b2f1-207e-40ad-a10d-b99abf8c1eb0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.647709 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-kube-api-access-s4jlp" (OuterVolumeSpecName: "kube-api-access-s4jlp") pod "7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" (UID: "7f48b2f1-207e-40ad-a10d-b99abf8c1eb0"). InnerVolumeSpecName "kube-api-access-s4jlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.726836 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4jlp\" (UniqueName: \"kubernetes.io/projected/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-kube-api-access-s4jlp\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.726870 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.751378 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" (UID: "7f48b2f1-207e-40ad-a10d-b99abf8c1eb0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.829422 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.836404 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" event={"ID":"236a458e-0795-42ad-8dab-84407618f96d","Type":"ContainerStarted","Data":"759d2152c99a57c69aecd8ffb2a23995d2b8e2c4643f5a1abbea240adb0bf6ae"} Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.839202 4731 generic.go:334] "Generic (PLEG): container finished" podID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerID="8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d" exitCode=0 Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.839259 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng695" event={"ID":"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0","Type":"ContainerDied","Data":"8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d"} Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.839283 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ng695" event={"ID":"7f48b2f1-207e-40ad-a10d-b99abf8c1eb0","Type":"ContainerDied","Data":"58ded8a2607823f90087680288f85546c2a9b5ab9af61186434d269445a8322b"} Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.839301 4731 scope.go:117] "RemoveContainer" containerID="8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.839418 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ng695" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.842557 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e68de59d-3f4f-46ce-9543-257fc725e94b","Type":"ContainerStarted","Data":"f16595105bcde2838d43a40ead398941bfb4cb3546eb530e6906028b4f484717"} Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.858444 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-6x8xb" podStartSLOduration=2.7278747599999997 podStartE2EDuration="38.858424945s" podCreationTimestamp="2025-11-25 15:49:16 +0000 UTC" firstStartedPulling="2025-11-25 15:49:17.988457818 +0000 UTC m=+3316.735592248" lastFinishedPulling="2025-11-25 15:49:54.119007993 +0000 UTC m=+3352.866142433" observedRunningTime="2025-11-25 15:49:54.855742519 +0000 UTC m=+3353.602876979" watchObservedRunningTime="2025-11-25 15:49:54.858424945 +0000 UTC m=+3353.605559375" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.895356 4731 scope.go:117] "RemoveContainer" containerID="4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.927880 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ng695"] Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.942263 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ng695"] Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.944673 4731 scope.go:117] "RemoveContainer" containerID="9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.985144 4731 scope.go:117] "RemoveContainer" containerID="8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d" Nov 25 15:49:54 crc kubenswrapper[4731]: E1125 15:49:54.985633 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d\": container with ID starting with 8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d not found: ID does not exist" containerID="8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.985665 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d"} err="failed to get container status \"8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d\": rpc error: code = NotFound desc = could not find container \"8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d\": container with ID starting with 8724e037d3e0b61f61b36f4fb35fbe8dbb19496e289e860610ad78616659512d not found: ID does not exist" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.985686 4731 scope.go:117] "RemoveContainer" containerID="4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd" Nov 25 15:49:54 crc kubenswrapper[4731]: E1125 15:49:54.986063 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd\": container with ID starting with 4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd not found: ID does not exist" 
containerID="4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.986800 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd"} err="failed to get container status \"4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd\": rpc error: code = NotFound desc = could not find container \"4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd\": container with ID starting with 4c16c38d624e8b270ed6193d7738fa01906b9ac9cf1086884e2375169efcf1dd not found: ID does not exist" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.986886 4731 scope.go:117] "RemoveContainer" containerID="9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea" Nov 25 15:49:54 crc kubenswrapper[4731]: E1125 15:49:54.987303 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea\": container with ID starting with 9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea not found: ID does not exist" containerID="9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea" Nov 25 15:49:54 crc kubenswrapper[4731]: I1125 15:49:54.987335 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea"} err="failed to get container status \"9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea\": rpc error: code = NotFound desc = could not find container \"9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea\": container with ID starting with 9c7ceeeef694ab21ae7e1942b43b4abcd8b801872d3b0e7f4427fce1ac1d6aea not found: ID does not exist" Nov 25 15:49:55 crc kubenswrapper[4731]: I1125 15:49:55.853850 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e68de59d-3f4f-46ce-9543-257fc725e94b","Type":"ContainerStarted","Data":"822474fa84985c19afe3de131a6e0b0e979dd694de3ab871fca7641a86181a85"} Nov 25 15:49:55 crc kubenswrapper[4731]: I1125 15:49:55.888959 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.480144155 podStartE2EDuration="5.888936129s" podCreationTimestamp="2025-11-25 15:49:50 +0000 UTC" firstStartedPulling="2025-11-25 15:49:51.694435213 +0000 UTC m=+3350.441569643" lastFinishedPulling="2025-11-25 15:49:55.103227187 +0000 UTC m=+3353.850361617" observedRunningTime="2025-11-25 15:49:55.882187751 +0000 UTC m=+3354.629322191" watchObservedRunningTime="2025-11-25 15:49:55.888936129 +0000 UTC m=+3354.636070569" Nov 25 15:49:56 crc kubenswrapper[4731]: I1125 15:49:56.825874 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" path="/var/lib/kubelet/pods/7f48b2f1-207e-40ad-a10d-b99abf8c1eb0/volumes" Nov 25 15:49:58 crc kubenswrapper[4731]: I1125 15:49:58.630767 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:58 crc kubenswrapper[4731]: I1125 15:49:58.631360 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:58 crc kubenswrapper[4731]: I1125 15:49:58.690279 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:58 crc kubenswrapper[4731]: I1125 15:49:58.889863 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" event={"ID":"60ec713e-70e1-4fbf-b011-b6e4bcf87539","Type":"ContainerStarted","Data":"032fa8d7536809324150352745d58f0d00287c9069e9f25c4767225d46bfea0e"} Nov 25 15:49:58 crc kubenswrapper[4731]: I1125 15:49:58.890338 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" Nov 25 15:49:58 crc kubenswrapper[4731]: I1125 15:49:58.893501 4731 patch_prober.go:28] interesting pod/observability-operator-d8bb48f5d-9s4pj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.1.3:8081/healthz\": dial tcp 10.217.1.3:8081: connect: connection refused" start-of-body= Nov 25 15:49:58 crc kubenswrapper[4731]: I1125 15:49:58.893577 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" podUID="60ec713e-70e1-4fbf-b011-b6e4bcf87539" containerName="operator" probeResult="failure" output="Get \"http://10.217.1.3:8081/healthz\": dial tcp 10.217.1.3:8081: connect: connection refused" Nov 25 15:49:58 crc kubenswrapper[4731]: I1125 15:49:58.917560 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" podStartSLOduration=1.371471804 podStartE2EDuration="41.917517316s" podCreationTimestamp="2025-11-25 15:49:17 +0000 UTC" firstStartedPulling="2025-11-25 15:49:18.066912817 +0000 UTC m=+3316.814047247" lastFinishedPulling="2025-11-25 15:49:58.612958319 +0000 UTC m=+3357.360092759" observedRunningTime="2025-11-25 15:49:58.911871275 +0000 UTC m=+3357.659005705" watchObservedRunningTime="2025-11-25 15:49:58.917517316 +0000 UTC m=+3357.664651746" Nov 25 15:49:58 crc kubenswrapper[4731]: I1125 15:49:58.947638 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:49:59 crc kubenswrapper[4731]: I1125 15:49:59.746365 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4pq5p"] Nov 25 15:49:59 crc kubenswrapper[4731]: I1125 15:49:59.956637 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-9s4pj" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.910227 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" event={"ID":"900460b9-59fd-4456-925b-ed34b683ac09","Type":"ContainerStarted","Data":"76e9a6b9fe0bae95543bd287c75df6a346bda76016c36377e380f25368790744"} Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.910515 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4pq5p" podUID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerName="registry-server" containerID="cri-o://55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4" gracePeriod=2 Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.930229 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 25 15:50:00 crc kubenswrapper[4731]: E1125 15:50:00.930720 4731 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="extract-content" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.930737 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="extract-content" Nov 25 15:50:00 crc kubenswrapper[4731]: E1125 15:50:00.930749 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="extract-utilities" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.930755 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="extract-utilities" Nov 25 15:50:00 crc kubenswrapper[4731]: E1125 15:50:00.930768 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="registry-server" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.930774 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="registry-server" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.931006 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f48b2f1-207e-40ad-a10d-b99abf8c1eb0" containerName="registry-server" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.943803 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.948913 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.948968 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.949189 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.949269 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.950692 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-69fb756767-wdpx7" podStartSLOduration=-9223371991.904116 podStartE2EDuration="44.950660931s" podCreationTimestamp="2025-11-25 15:49:16 +0000 UTC" firstStartedPulling="2025-11-25 15:49:17.901800494 +0000 UTC m=+3316.648934924" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:50:00.931244728 +0000 UTC m=+3359.678379158" watchObservedRunningTime="2025-11-25 15:50:00.950660931 +0000 UTC m=+3359.697795381" Nov 25 15:50:00 crc kubenswrapper[4731]: I1125 15:50:00.956982 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-84q2f" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.041700 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.065050 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: 
\"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.065135 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.065164 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8ppq\" (UniqueName: \"kubernetes.io/projected/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-kube-api-access-p8ppq\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.065199 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.065221 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.065243 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.065373 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.169218 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.169300 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.169330 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8ppq\" (UniqueName: 
\"kubernetes.io/projected/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-kube-api-access-p8ppq\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.169356 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.169379 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.169399 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.169532 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.171063 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.176712 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.180375 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.180424 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.180504 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.191902 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.203389 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8ppq\" (UniqueName: \"kubernetes.io/projected/7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a-kube-api-access-p8ppq\") pod \"alertmanager-metric-storage-0\" (UID: \"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a\") " pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.285904 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.406406 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.487789 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q775v\" (UniqueName: \"kubernetes.io/projected/1848bcec-6275-4c4c-a4da-954d9798b9a8-kube-api-access-q775v\") pod \"1848bcec-6275-4c4c-a4da-954d9798b9a8\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.488148 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-utilities\") pod \"1848bcec-6275-4c4c-a4da-954d9798b9a8\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.488243 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-catalog-content\") pod \"1848bcec-6275-4c4c-a4da-954d9798b9a8\" (UID: \"1848bcec-6275-4c4c-a4da-954d9798b9a8\") " Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.493632 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-utilities" (OuterVolumeSpecName: "utilities") pod "1848bcec-6275-4c4c-a4da-954d9798b9a8" (UID: "1848bcec-6275-4c4c-a4da-954d9798b9a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.494229 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1848bcec-6275-4c4c-a4da-954d9798b9a8-kube-api-access-q775v" (OuterVolumeSpecName: "kube-api-access-q775v") pod "1848bcec-6275-4c4c-a4da-954d9798b9a8" (UID: "1848bcec-6275-4c4c-a4da-954d9798b9a8"). InnerVolumeSpecName "kube-api-access-q775v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.552002 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:50:01 crc kubenswrapper[4731]: E1125 15:50:01.552489 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerName="registry-server" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.552502 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerName="registry-server" Nov 25 15:50:01 crc kubenswrapper[4731]: E1125 15:50:01.552533 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerName="extract-utilities" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.552541 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerName="extract-utilities" Nov 25 15:50:01 crc kubenswrapper[4731]: E1125 15:50:01.552553 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerName="extract-content" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.552559 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerName="extract-content" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.552746 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerName="registry-server" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.554651 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.558652 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-sf6fp" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.558912 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.559073 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.560612 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.561693 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.566236 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.569874 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.581250 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1848bcec-6275-4c4c-a4da-954d9798b9a8" (UID: "1848bcec-6275-4c4c-a4da-954d9798b9a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.593222 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/524074c0-3531-4cd8-afba-5672252b6beb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.593380 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.593434 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-config\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.593561 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfdk7\" (UniqueName: \"kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-kube-api-access-sfdk7\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.593825 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/524074c0-3531-4cd8-afba-5672252b6beb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.594095 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.594147 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.594173 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.594336 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q775v\" 
(UniqueName: \"kubernetes.io/projected/1848bcec-6275-4c4c-a4da-954d9798b9a8-kube-api-access-q775v\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.594350 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.594361 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1848bcec-6275-4c4c-a4da-954d9798b9a8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.696219 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.696278 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-config\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.696315 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfdk7\" (UniqueName: \"kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-kube-api-access-sfdk7\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.696411 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/524074c0-3531-4cd8-afba-5672252b6beb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.696505 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.696534 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.696554 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.696577 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/524074c0-3531-4cd8-afba-5672252b6beb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.697457 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/524074c0-3531-4cd8-afba-5672252b6beb-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.702808 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-config\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.704381 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.705263 4731 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.705286 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b406d8e239e15e30a1b7a6e746a41dfccba14b3e845aa953b91df4689559c85/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.705807 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.707286 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.708397 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/524074c0-3531-4cd8-afba-5672252b6beb-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.726316 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sfdk7\" (UniqueName: \"kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-kube-api-access-sfdk7\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.764351 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") pod \"prometheus-metric-storage-0\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.908641 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.923260 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-qz72z" event={"ID":"bfcf5cee-d085-4fb4-8144-9a38143246b4","Type":"ContainerStarted","Data":"cf9bc483204926902915e648cdbdb69e67fd5fbd5ae2d66030a213bb7236526d"} Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.923585 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-qz72z" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.928020 4731 generic.go:334] "Generic (PLEG): container finished" podID="1848bcec-6275-4c4c-a4da-954d9798b9a8" containerID="55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4" exitCode=0 Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.928057 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pq5p" event={"ID":"1848bcec-6275-4c4c-a4da-954d9798b9a8","Type":"ContainerDied","Data":"55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4"} Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.928101 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4pq5p" event={"ID":"1848bcec-6275-4c4c-a4da-954d9798b9a8","Type":"ContainerDied","Data":"5dadc9c45a9dfbc52f0b9d132bbc26c0c9dd12f78679e8ceb5234795d329948a"} Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.928119 4731 scope.go:117] "RemoveContainer" containerID="55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.928271 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4pq5p" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.959366 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-qz72z" podStartSLOduration=2.38687471 podStartE2EDuration="44.959340591s" podCreationTimestamp="2025-11-25 15:49:17 +0000 UTC" firstStartedPulling="2025-11-25 15:49:18.415959338 +0000 UTC m=+3317.163093768" lastFinishedPulling="2025-11-25 15:50:00.988425219 +0000 UTC m=+3359.735559649" observedRunningTime="2025-11-25 15:50:01.945654871 +0000 UTC m=+3360.692789301" watchObservedRunningTime="2025-11-25 15:50:01.959340591 +0000 UTC m=+3360.706475021" Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.976654 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4pq5p"] Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.986359 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4pq5p"] Nov 25 15:50:01 crc kubenswrapper[4731]: I1125 15:50:01.991933 4731 scope.go:117] "RemoveContainer" containerID="120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837" Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.039845 4731 scope.go:117] "RemoveContainer" containerID="94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2" Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.069480 4731 scope.go:117] "RemoveContainer" containerID="55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4" Nov 25 15:50:02 crc kubenswrapper[4731]: E1125 15:50:02.069969 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4\": container with ID starting with 55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4 not found: ID does not exist" containerID="55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4" Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.070010 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4"} err="failed to get container status \"55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4\": rpc error: code = NotFound desc = could not find container \"55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4\": container with ID starting with 55ec8ddcf677257b2db9cb26cdf8f37d96e5fa01f3d55b41ab7feef40272d5a4 not found: ID does not exist" Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.070036 4731 scope.go:117] "RemoveContainer" containerID="120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837" Nov 25 15:50:02 crc kubenswrapper[4731]: E1125 15:50:02.070445 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837\": container with ID starting with 120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837 not found: ID does not exist" containerID="120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837" Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.070472 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837"} err="failed to get 
container status \"120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837\": rpc error: code = NotFound desc = could not find container \"120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837\": container with ID starting with 120f330f9c29b536a57112fa073a5cf3c95e8e1d94f4b75b844b9f6446201837 not found: ID does not exist" Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.070487 4731 scope.go:117] "RemoveContainer" containerID="94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2" Nov 25 15:50:02 crc kubenswrapper[4731]: E1125 15:50:02.070905 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2\": container with ID starting with 94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2 not found: ID does not exist" containerID="94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2" Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.070936 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2"} err="failed to get container status \"94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2\": rpc error: code = NotFound desc = could not find container \"94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2\": container with ID starting with 94e991ec4605bc6d84c46d50e0e268370d8189549ef94e55caa14a78dae2d0a2 not found: ID does not exist" Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.343683 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.534955 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:50:02 crc kubenswrapper[4731]: W1125 15:50:02.539196 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod524074c0_3531_4cd8_afba_5672252b6beb.slice/crio-38f6c3db3b9ebd22e722ac91cbd98be5745703144e9d1c2b5a2efb21481a629f WatchSource:0}: Error finding container 38f6c3db3b9ebd22e722ac91cbd98be5745703144e9d1c2b5a2efb21481a629f: Status 404 returned error can't find the container with id 38f6c3db3b9ebd22e722ac91cbd98be5745703144e9d1c2b5a2efb21481a629f Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.824570 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1848bcec-6275-4c4c-a4da-954d9798b9a8" path="/var/lib/kubelet/pods/1848bcec-6275-4c4c-a4da-954d9798b9a8/volumes" Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.937391 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a","Type":"ContainerStarted","Data":"feaf6cf865c590ae44eeb7c7acf500ccc3eaa5f00646955a20fea7a932c8c062"} Nov 25 15:50:02 crc kubenswrapper[4731]: I1125 15:50:02.940804 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"524074c0-3531-4cd8-afba-5672252b6beb","Type":"ContainerStarted","Data":"38f6c3db3b9ebd22e722ac91cbd98be5745703144e9d1c2b5a2efb21481a629f"} Nov 25 15:50:07 crc kubenswrapper[4731]: I1125 15:50:07.700085 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-qz72z" Nov 25 15:50:08 crc 
kubenswrapper[4731]: I1125 15:50:08.998117 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a","Type":"ContainerStarted","Data":"02792f6d1e66df01312607d8e531cd07300c74e59fc9cb4e07c9b28a5c23b5c4"} Nov 25 15:50:09 crc kubenswrapper[4731]: I1125 15:50:08.999717 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"524074c0-3531-4cd8-afba-5672252b6beb","Type":"ContainerStarted","Data":"b805c426a9a46d3f1883dd6ba57a615430a95bb234937fbd3e6080aa9c94d225"} Nov 25 15:50:15 crc kubenswrapper[4731]: I1125 15:50:15.057631 4731 generic.go:334] "Generic (PLEG): container finished" podID="7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a" containerID="02792f6d1e66df01312607d8e531cd07300c74e59fc9cb4e07c9b28a5c23b5c4" exitCode=0 Nov 25 15:50:15 crc kubenswrapper[4731]: I1125 15:50:15.057713 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a","Type":"ContainerDied","Data":"02792f6d1e66df01312607d8e531cd07300c74e59fc9cb4e07c9b28a5c23b5c4"} Nov 25 15:50:15 crc kubenswrapper[4731]: I1125 15:50:15.060503 4731 generic.go:334] "Generic (PLEG): container finished" podID="524074c0-3531-4cd8-afba-5672252b6beb" containerID="b805c426a9a46d3f1883dd6ba57a615430a95bb234937fbd3e6080aa9c94d225" exitCode=0 Nov 25 15:50:15 crc kubenswrapper[4731]: I1125 15:50:15.060537 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"524074c0-3531-4cd8-afba-5672252b6beb","Type":"ContainerDied","Data":"b805c426a9a46d3f1883dd6ba57a615430a95bb234937fbd3e6080aa9c94d225"} Nov 25 15:50:18 crc kubenswrapper[4731]: I1125 15:50:18.103375 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a","Type":"ContainerStarted","Data":"38ce94fff6cb3aa08f31452d87e7d12821007381c1a346058025993b18561280"} Nov 25 15:50:22 crc kubenswrapper[4731]: I1125 15:50:22.171202 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a","Type":"ContainerStarted","Data":"42adc5cead250120c21465c0be6a37a60950a9fa7791779b6f73d028a4e58085"} Nov 25 15:50:22 crc kubenswrapper[4731]: I1125 15:50:22.171895 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:22 crc kubenswrapper[4731]: I1125 15:50:22.176425 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Nov 25 15:50:22 crc kubenswrapper[4731]: I1125 15:50:22.210252 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=7.170476007 podStartE2EDuration="22.210226129s" podCreationTimestamp="2025-11-25 15:50:00 +0000 UTC" firstStartedPulling="2025-11-25 15:50:02.348311286 +0000 UTC m=+3361.095445716" lastFinishedPulling="2025-11-25 15:50:17.388061408 +0000 UTC m=+3376.135195838" observedRunningTime="2025-11-25 15:50:22.199959734 +0000 UTC m=+3380.947094194" watchObservedRunningTime="2025-11-25 15:50:22.210226129 +0000 UTC m=+3380.957360579" Nov 25 15:50:23 crc kubenswrapper[4731]: I1125 15:50:23.182360 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"524074c0-3531-4cd8-afba-5672252b6beb","Type":"ContainerStarted","Data":"39cbf3a418fd33984167f438bd04a8e35ba71539fcf60a55b6365ba915fea911"} Nov 25 15:50:26 crc kubenswrapper[4731]: I1125 15:50:26.231871 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"524074c0-3531-4cd8-afba-5672252b6beb","Type":"ContainerStarted","Data":"bf61156db68abdf4e902f22b100ec179f01c6f26b8c562b540023ae2193dcdeb"} Nov 25 15:50:30 crc kubenswrapper[4731]: I1125 15:50:30.273153 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"524074c0-3531-4cd8-afba-5672252b6beb","Type":"ContainerStarted","Data":"a6176630ef1f1e03ebaf1a0cc2a14b2aeedfbc408e26c823f160e418b17be1b6"} Nov 25 15:50:30 crc kubenswrapper[4731]: I1125 15:50:30.310922 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.679527961 podStartE2EDuration="30.310898154s" podCreationTimestamp="2025-11-25 15:50:00 +0000 UTC" firstStartedPulling="2025-11-25 15:50:02.542767888 +0000 UTC m=+3361.289902318" lastFinishedPulling="2025-11-25 15:50:29.174138081 +0000 UTC m=+3387.921272511" observedRunningTime="2025-11-25 15:50:30.298128447 +0000 UTC m=+3389.045262917" watchObservedRunningTime="2025-11-25 15:50:30.310898154 +0000 UTC m=+3389.058032594" Nov 25 15:50:31 crc kubenswrapper[4731]: I1125 15:50:31.909582 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:31 crc kubenswrapper[4731]: I1125 15:50:31.909921 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:31 crc kubenswrapper[4731]: I1125 15:50:31.912449 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:32 crc kubenswrapper[4731]: I1125 15:50:32.292171 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.708381 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.709182 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="18b7922d-f0bb-4b3c-90ff-c7456e477222" containerName="openstackclient" containerID="cri-o://6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3" gracePeriod=2 Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.719001 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.740997 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Nov 25 15:50:33 crc kubenswrapper[4731]: E1125 15:50:33.741507 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b7922d-f0bb-4b3c-90ff-c7456e477222" containerName="openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.741547 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b7922d-f0bb-4b3c-90ff-c7456e477222" containerName="openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.741862 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b7922d-f0bb-4b3c-90ff-c7456e477222" containerName="openstackclient" Nov 25 15:50:33 crc 
kubenswrapper[4731]: I1125 15:50:33.742785 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.749865 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="18b7922d-f0bb-4b3c-90ff-c7456e477222" podUID="37c1e8ac-9952-41e7-9572-b2112cbd8d28" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.751997 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.778694 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c1e8ac-9952-41e7-9572-b2112cbd8d28-combined-ca-bundle\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.778896 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m86p\" (UniqueName: \"kubernetes.io/projected/37c1e8ac-9952-41e7-9572-b2112cbd8d28-kube-api-access-7m86p\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.779024 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/37c1e8ac-9952-41e7-9572-b2112cbd8d28-openstack-config\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.779114 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/37c1e8ac-9952-41e7-9572-b2112cbd8d28-openstack-config-secret\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.881039 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c1e8ac-9952-41e7-9572-b2112cbd8d28-combined-ca-bundle\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.881157 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m86p\" (UniqueName: \"kubernetes.io/projected/37c1e8ac-9952-41e7-9572-b2112cbd8d28-kube-api-access-7m86p\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.881203 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/37c1e8ac-9952-41e7-9572-b2112cbd8d28-openstack-config\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.881251 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/37c1e8ac-9952-41e7-9572-b2112cbd8d28-openstack-config-secret\") pod \"openstackclient\" 
(UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.882834 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/37c1e8ac-9952-41e7-9572-b2112cbd8d28-openstack-config\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.892384 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/37c1e8ac-9952-41e7-9572-b2112cbd8d28-openstack-config-secret\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.900086 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/37c1e8ac-9952-41e7-9572-b2112cbd8d28-combined-ca-bundle\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:33 crc kubenswrapper[4731]: I1125 15:50:33.907159 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m86p\" (UniqueName: \"kubernetes.io/projected/37c1e8ac-9952-41e7-9572-b2112cbd8d28-kube-api-access-7m86p\") pod \"openstackclient\" (UID: \"37c1e8ac-9952-41e7-9572-b2112cbd8d28\") " pod="openstack/openstackclient" Nov 25 15:50:34 crc kubenswrapper[4731]: I1125 15:50:34.025284 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 15:50:34 crc kubenswrapper[4731]: I1125 15:50:34.025856 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-api" containerID="cri-o://4b62654d8a1456ff154280cceb518c57721d537a81d6f32ba69ffa9d9a36600b" gracePeriod=30 Nov 25 15:50:34 crc kubenswrapper[4731]: I1125 15:50:34.025928 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-listener" containerID="cri-o://822474fa84985c19afe3de131a6e0b0e979dd694de3ab871fca7641a86181a85" gracePeriod=30 Nov 25 15:50:34 crc kubenswrapper[4731]: I1125 15:50:34.025954 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-notifier" containerID="cri-o://f16595105bcde2838d43a40ead398941bfb4cb3546eb530e6906028b4f484717" gracePeriod=30 Nov 25 15:50:34 crc kubenswrapper[4731]: I1125 15:50:34.026091 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-evaluator" containerID="cri-o://cd90116793f6c00f619eb14840467d104a59fdfa44339b80b899dd1b4ff0ce5e" gracePeriod=30 Nov 25 15:50:34 crc kubenswrapper[4731]: I1125 15:50:34.081204 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 15:50:34 crc kubenswrapper[4731]: I1125 15:50:34.320163 4731 generic.go:334] "Generic (PLEG): container finished" podID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerID="4b62654d8a1456ff154280cceb518c57721d537a81d6f32ba69ffa9d9a36600b" exitCode=0 Nov 25 15:50:34 crc kubenswrapper[4731]: I1125 15:50:34.320337 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e68de59d-3f4f-46ce-9543-257fc725e94b","Type":"ContainerDied","Data":"4b62654d8a1456ff154280cceb518c57721d537a81d6f32ba69ffa9d9a36600b"} Nov 25 15:50:34 crc kubenswrapper[4731]: I1125 15:50:34.755369 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Nov 25 15:50:35 crc kubenswrapper[4731]: I1125 15:50:35.329328 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"37c1e8ac-9952-41e7-9572-b2112cbd8d28","Type":"ContainerStarted","Data":"2c2cd7987745529a407edcaa8aae2f7921e9ef5e3b803324a0ed9b47d4904ca3"} Nov 25 15:50:35 crc kubenswrapper[4731]: I1125 15:50:35.329636 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"37c1e8ac-9952-41e7-9572-b2112cbd8d28","Type":"ContainerStarted","Data":"8b4c62218889297be1ecb294d5cef088000d765e28116a1288ae906d4c91f812"} Nov 25 15:50:35 crc kubenswrapper[4731]: I1125 15:50:35.332104 4731 generic.go:334] "Generic (PLEG): container finished" podID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerID="cd90116793f6c00f619eb14840467d104a59fdfa44339b80b899dd1b4ff0ce5e" exitCode=0 Nov 25 15:50:35 crc kubenswrapper[4731]: I1125 15:50:35.332134 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e68de59d-3f4f-46ce-9543-257fc725e94b","Type":"ContainerDied","Data":"cd90116793f6c00f619eb14840467d104a59fdfa44339b80b899dd1b4ff0ce5e"} Nov 25 15:50:35 crc kubenswrapper[4731]: I1125 15:50:35.344236 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.344222001 podStartE2EDuration="2.344222001s" podCreationTimestamp="2025-11-25 15:50:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:50:35.343513134 +0000 UTC m=+3394.090647574" watchObservedRunningTime="2025-11-25 15:50:35.344222001 +0000 UTC m=+3394.091356431" Nov 25 15:50:35 crc kubenswrapper[4731]: I1125 15:50:35.500276 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:50:35 crc kubenswrapper[4731]: I1125 15:50:35.500604 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="prometheus" containerID="cri-o://39cbf3a418fd33984167f438bd04a8e35ba71539fcf60a55b6365ba915fea911" gracePeriod=600 Nov 25 15:50:35 crc kubenswrapper[4731]: I1125 15:50:35.500677 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="thanos-sidecar" containerID="cri-o://a6176630ef1f1e03ebaf1a0cc2a14b2aeedfbc408e26c823f160e418b17be1b6" gracePeriod=600 Nov 25 15:50:35 crc kubenswrapper[4731]: I1125 15:50:35.500686 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" 
podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="config-reloader" containerID="cri-o://bf61156db68abdf4e902f22b100ec179f01c6f26b8c562b540023ae2193dcdeb" gracePeriod=600 Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.161056 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.230954 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-combined-ca-bundle\") pod \"18b7922d-f0bb-4b3c-90ff-c7456e477222\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.231453 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config\") pod \"18b7922d-f0bb-4b3c-90ff-c7456e477222\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.231600 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config-secret\") pod \"18b7922d-f0bb-4b3c-90ff-c7456e477222\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.231705 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw5s2\" (UniqueName: \"kubernetes.io/projected/18b7922d-f0bb-4b3c-90ff-c7456e477222-kube-api-access-bw5s2\") pod \"18b7922d-f0bb-4b3c-90ff-c7456e477222\" (UID: \"18b7922d-f0bb-4b3c-90ff-c7456e477222\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.243444 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b7922d-f0bb-4b3c-90ff-c7456e477222-kube-api-access-bw5s2" (OuterVolumeSpecName: "kube-api-access-bw5s2") pod "18b7922d-f0bb-4b3c-90ff-c7456e477222" (UID: "18b7922d-f0bb-4b3c-90ff-c7456e477222"). InnerVolumeSpecName "kube-api-access-bw5s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.272944 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "18b7922d-f0bb-4b3c-90ff-c7456e477222" (UID: "18b7922d-f0bb-4b3c-90ff-c7456e477222"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.288029 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "18b7922d-f0bb-4b3c-90ff-c7456e477222" (UID: "18b7922d-f0bb-4b3c-90ff-c7456e477222"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.315540 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "18b7922d-f0bb-4b3c-90ff-c7456e477222" (UID: "18b7922d-f0bb-4b3c-90ff-c7456e477222"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.334686 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.335117 4731 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.335136 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw5s2\" (UniqueName: \"kubernetes.io/projected/18b7922d-f0bb-4b3c-90ff-c7456e477222-kube-api-access-bw5s2\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.335155 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/18b7922d-f0bb-4b3c-90ff-c7456e477222-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.356990 4731 generic.go:334] "Generic (PLEG): container finished" podID="524074c0-3531-4cd8-afba-5672252b6beb" containerID="a6176630ef1f1e03ebaf1a0cc2a14b2aeedfbc408e26c823f160e418b17be1b6" exitCode=0 Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.357034 4731 generic.go:334] "Generic (PLEG): container finished" podID="524074c0-3531-4cd8-afba-5672252b6beb" containerID="bf61156db68abdf4e902f22b100ec179f01c6f26b8c562b540023ae2193dcdeb" exitCode=0 Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.357049 4731 generic.go:334] "Generic (PLEG): container finished" podID="524074c0-3531-4cd8-afba-5672252b6beb" containerID="39cbf3a418fd33984167f438bd04a8e35ba71539fcf60a55b6365ba915fea911" exitCode=0 Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.357172 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"524074c0-3531-4cd8-afba-5672252b6beb","Type":"ContainerDied","Data":"a6176630ef1f1e03ebaf1a0cc2a14b2aeedfbc408e26c823f160e418b17be1b6"} Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.357211 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"524074c0-3531-4cd8-afba-5672252b6beb","Type":"ContainerDied","Data":"bf61156db68abdf4e902f22b100ec179f01c6f26b8c562b540023ae2193dcdeb"} Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.357228 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"524074c0-3531-4cd8-afba-5672252b6beb","Type":"ContainerDied","Data":"39cbf3a418fd33984167f438bd04a8e35ba71539fcf60a55b6365ba915fea911"} Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.372969 4731 generic.go:334] "Generic (PLEG): container finished" podID="18b7922d-f0bb-4b3c-90ff-c7456e477222" containerID="6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3" exitCode=137 Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.373741 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.373897 4731 scope.go:117] "RemoveContainer" containerID="6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.398839 4731 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="18b7922d-f0bb-4b3c-90ff-c7456e477222" podUID="37c1e8ac-9952-41e7-9572-b2112cbd8d28" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.403309 4731 scope.go:117] "RemoveContainer" containerID="6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3" Nov 25 15:50:36 crc kubenswrapper[4731]: E1125 15:50:36.403859 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3\": container with ID starting with 6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3 not found: ID does not exist" containerID="6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.403900 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3"} err="failed to get container status \"6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3\": rpc error: code = NotFound desc = could not find container \"6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3\": container with ID starting with 6cbfd4c076113c645e122fce565f8e49a906726dd90e298769f123f97d58a5e3 not found: ID does not exist" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.443306 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.538323 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-tls-assets\") pod \"524074c0-3531-4cd8-afba-5672252b6beb\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.538378 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/524074c0-3531-4cd8-afba-5672252b6beb-config-out\") pod \"524074c0-3531-4cd8-afba-5672252b6beb\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.538435 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfdk7\" (UniqueName: \"kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-kube-api-access-sfdk7\") pod \"524074c0-3531-4cd8-afba-5672252b6beb\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.538614 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") pod \"524074c0-3531-4cd8-afba-5672252b6beb\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.539037 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/524074c0-3531-4cd8-afba-5672252b6beb-prometheus-metric-storage-rulefiles-0\") pod \"524074c0-3531-4cd8-afba-5672252b6beb\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.539080 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-thanos-prometheus-http-client-file\") pod \"524074c0-3531-4cd8-afba-5672252b6beb\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.539348 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-config\") pod \"524074c0-3531-4cd8-afba-5672252b6beb\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.539421 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-web-config\") pod \"524074c0-3531-4cd8-afba-5672252b6beb\" (UID: \"524074c0-3531-4cd8-afba-5672252b6beb\") " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.539475 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/524074c0-3531-4cd8-afba-5672252b6beb-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "524074c0-3531-4cd8-afba-5672252b6beb" (UID: "524074c0-3531-4cd8-afba-5672252b6beb"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.540379 4731 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/524074c0-3531-4cd8-afba-5672252b6beb-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.544494 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/524074c0-3531-4cd8-afba-5672252b6beb-config-out" (OuterVolumeSpecName: "config-out") pod "524074c0-3531-4cd8-afba-5672252b6beb" (UID: "524074c0-3531-4cd8-afba-5672252b6beb"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.544516 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "524074c0-3531-4cd8-afba-5672252b6beb" (UID: "524074c0-3531-4cd8-afba-5672252b6beb"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.544583 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-kube-api-access-sfdk7" (OuterVolumeSpecName: "kube-api-access-sfdk7") pod "524074c0-3531-4cd8-afba-5672252b6beb" (UID: "524074c0-3531-4cd8-afba-5672252b6beb"). InnerVolumeSpecName "kube-api-access-sfdk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.545188 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "524074c0-3531-4cd8-afba-5672252b6beb" (UID: "524074c0-3531-4cd8-afba-5672252b6beb"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.551224 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-config" (OuterVolumeSpecName: "config") pod "524074c0-3531-4cd8-afba-5672252b6beb" (UID: "524074c0-3531-4cd8-afba-5672252b6beb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.575489 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-web-config" (OuterVolumeSpecName: "web-config") pod "524074c0-3531-4cd8-afba-5672252b6beb" (UID: "524074c0-3531-4cd8-afba-5672252b6beb"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.591283 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "524074c0-3531-4cd8-afba-5672252b6beb" (UID: "524074c0-3531-4cd8-afba-5672252b6beb"). InnerVolumeSpecName "pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.641961 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.642008 4731 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-web-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.642022 4731 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/524074c0-3531-4cd8-afba-5672252b6beb-config-out\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.642037 4731 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.642050 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfdk7\" (UniqueName: \"kubernetes.io/projected/524074c0-3531-4cd8-afba-5672252b6beb-kube-api-access-sfdk7\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.642098 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") on node \"crc\" " Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.642114 4731 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/524074c0-3531-4cd8-afba-5672252b6beb-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.688008 4731 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.688223 4731 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6") on node "crc" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.745509 4731 reconciler_common.go:293] "Volume detached for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:36 crc kubenswrapper[4731]: I1125 15:50:36.834349 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b7922d-f0bb-4b3c-90ff-c7456e477222" path="/var/lib/kubelet/pods/18b7922d-f0bb-4b3c-90ff-c7456e477222/volumes" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.385759 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"524074c0-3531-4cd8-afba-5672252b6beb","Type":"ContainerDied","Data":"38f6c3db3b9ebd22e722ac91cbd98be5745703144e9d1c2b5a2efb21481a629f"} Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.386103 4731 scope.go:117] "RemoveContainer" containerID="a6176630ef1f1e03ebaf1a0cc2a14b2aeedfbc408e26c823f160e418b17be1b6" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.386296 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.396043 4731 generic.go:334] "Generic (PLEG): container finished" podID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerID="f16595105bcde2838d43a40ead398941bfb4cb3546eb530e6906028b4f484717" exitCode=0 Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.396133 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e68de59d-3f4f-46ce-9543-257fc725e94b","Type":"ContainerDied","Data":"f16595105bcde2838d43a40ead398941bfb4cb3546eb530e6906028b4f484717"} Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.423181 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.434342 4731 scope.go:117] "RemoveContainer" containerID="bf61156db68abdf4e902f22b100ec179f01c6f26b8c562b540023ae2193dcdeb" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.447553 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.469106 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:50:37 crc kubenswrapper[4731]: E1125 15:50:37.469733 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="prometheus" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.469754 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="prometheus" Nov 25 15:50:37 crc kubenswrapper[4731]: E1125 15:50:37.469763 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="thanos-sidecar" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.469770 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="thanos-sidecar" Nov 25 15:50:37 crc 
kubenswrapper[4731]: E1125 15:50:37.469784 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="init-config-reloader" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.469792 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="init-config-reloader" Nov 25 15:50:37 crc kubenswrapper[4731]: E1125 15:50:37.469806 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="config-reloader" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.469831 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="config-reloader" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.470068 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="config-reloader" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.470092 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="thanos-sidecar" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.470127 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="524074c0-3531-4cd8-afba-5672252b6beb" containerName="prometheus" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.472392 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.474512 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.474774 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.475313 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.475624 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.476184 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-sf6fp" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.476184 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.486946 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.504567 4731 scope.go:117] "RemoveContainer" containerID="39cbf3a418fd33984167f438bd04a8e35ba71539fcf60a55b6365ba915fea911" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.514828 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.538081 4731 scope.go:117] "RemoveContainer" containerID="b805c426a9a46d3f1883dd6ba57a615430a95bb234937fbd3e6080aa9c94d225" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673036 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39039046-22c0-45c4-8791-b43bca4248c1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673110 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673178 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673268 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673303 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673346 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39039046-22c0-45c4-8791-b43bca4248c1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673371 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56s52\" (UniqueName: \"kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-kube-api-access-56s52\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673431 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673456 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673480 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.673524 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-config\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775275 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775405 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775441 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775487 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39039046-22c0-45c4-8791-b43bca4248c1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775511 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56s52\" (UniqueName: \"kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-kube-api-access-56s52\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775570 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" 
Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775594 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775619 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775660 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-config\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775718 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39039046-22c0-45c4-8791-b43bca4248c1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.775747 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.777435 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39039046-22c0-45c4-8791-b43bca4248c1-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.781216 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-config\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.781621 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39039046-22c0-45c4-8791-b43bca4248c1-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.783608 4731 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.783882 4731 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2b406d8e239e15e30a1b7a6e746a41dfccba14b3e845aa953b91df4689559c85/globalmount\"" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.785339 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.785544 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.786048 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.786058 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.786497 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.793791 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 15:50:37.799314 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56s52\" (UniqueName: \"kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-kube-api-access-56s52\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:37 crc kubenswrapper[4731]: I1125 
15:50:37.824154 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") pod \"prometheus-metric-storage-0\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.107422 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.419596 4731 generic.go:334] "Generic (PLEG): container finished" podID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerID="822474fa84985c19afe3de131a6e0b0e979dd694de3ab871fca7641a86181a85" exitCode=0 Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.420096 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e68de59d-3f4f-46ce-9543-257fc725e94b","Type":"ContainerDied","Data":"822474fa84985c19afe3de131a6e0b0e979dd694de3ab871fca7641a86181a85"} Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.420130 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e68de59d-3f4f-46ce-9543-257fc725e94b","Type":"ContainerDied","Data":"dc057c73805e1c9d5a0c4b532c9d13fcb7c6bb41f251a6901bc38e8638144838"} Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.420142 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc057c73805e1c9d5a0c4b532c9d13fcb7c6bb41f251a6901bc38e8638144838" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.517254 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.693871 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-public-tls-certs\") pod \"e68de59d-3f4f-46ce-9543-257fc725e94b\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.693967 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-internal-tls-certs\") pod \"e68de59d-3f4f-46ce-9543-257fc725e94b\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.694096 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-config-data\") pod \"e68de59d-3f4f-46ce-9543-257fc725e94b\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.694128 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-combined-ca-bundle\") pod \"e68de59d-3f4f-46ce-9543-257fc725e94b\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.694181 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-scripts\") pod \"e68de59d-3f4f-46ce-9543-257fc725e94b\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " Nov 25 15:50:38 crc 
kubenswrapper[4731]: I1125 15:50:38.694224 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2dwn\" (UniqueName: \"kubernetes.io/projected/e68de59d-3f4f-46ce-9543-257fc725e94b-kube-api-access-b2dwn\") pod \"e68de59d-3f4f-46ce-9543-257fc725e94b\" (UID: \"e68de59d-3f4f-46ce-9543-257fc725e94b\") " Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.701228 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e68de59d-3f4f-46ce-9543-257fc725e94b-kube-api-access-b2dwn" (OuterVolumeSpecName: "kube-api-access-b2dwn") pod "e68de59d-3f4f-46ce-9543-257fc725e94b" (UID: "e68de59d-3f4f-46ce-9543-257fc725e94b"). InnerVolumeSpecName "kube-api-access-b2dwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.703129 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-scripts" (OuterVolumeSpecName: "scripts") pod "e68de59d-3f4f-46ce-9543-257fc725e94b" (UID: "e68de59d-3f4f-46ce-9543-257fc725e94b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:50:38 crc kubenswrapper[4731]: W1125 15:50:38.764703 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39039046_22c0_45c4_8791_b43bca4248c1.slice/crio-53186209d0cac1acbd454ed914f3859228859ead505b3314df1a690544df94c4 WatchSource:0}: Error finding container 53186209d0cac1acbd454ed914f3859228859ead505b3314df1a690544df94c4: Status 404 returned error can't find the container with id 53186209d0cac1acbd454ed914f3859228859ead505b3314df1a690544df94c4 Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.765028 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.769443 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "e68de59d-3f4f-46ce-9543-257fc725e94b" (UID: "e68de59d-3f4f-46ce-9543-257fc725e94b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.774675 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e68de59d-3f4f-46ce-9543-257fc725e94b" (UID: "e68de59d-3f4f-46ce-9543-257fc725e94b"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.796637 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-scripts\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.796692 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2dwn\" (UniqueName: \"kubernetes.io/projected/e68de59d-3f4f-46ce-9543-257fc725e94b-kube-api-access-b2dwn\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.796709 4731 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-public-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.796720 4731 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.823096 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="524074c0-3531-4cd8-afba-5672252b6beb" path="/var/lib/kubelet/pods/524074c0-3531-4cd8-afba-5672252b6beb/volumes" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.833129 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e68de59d-3f4f-46ce-9543-257fc725e94b" (UID: "e68de59d-3f4f-46ce-9543-257fc725e94b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.837876 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-config-data" (OuterVolumeSpecName: "config-data") pod "e68de59d-3f4f-46ce-9543-257fc725e94b" (UID: "e68de59d-3f4f-46ce-9543-257fc725e94b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.898184 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:38 crc kubenswrapper[4731]: I1125 15:50:38.898228 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e68de59d-3f4f-46ce-9543-257fc725e94b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.438376 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39039046-22c0-45c4-8791-b43bca4248c1","Type":"ContainerStarted","Data":"53186209d0cac1acbd454ed914f3859228859ead505b3314df1a690544df94c4"} Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.438461 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.474628 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.488142 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.515760 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Nov 25 15:50:39 crc kubenswrapper[4731]: E1125 15:50:39.516335 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-api" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.516365 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-api" Nov 25 15:50:39 crc kubenswrapper[4731]: E1125 15:50:39.516390 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-notifier" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.516400 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-notifier" Nov 25 15:50:39 crc kubenswrapper[4731]: E1125 15:50:39.516419 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-listener" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.516427 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-listener" Nov 25 15:50:39 crc kubenswrapper[4731]: E1125 15:50:39.516447 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-evaluator" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.516456 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-evaluator" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.516727 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-notifier" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.516756 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-api" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.516771 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-evaluator" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.516804 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" containerName="aodh-listener" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.519203 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.522218 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.522520 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-h9h4w" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.523580 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.523613 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.523852 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.547240 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.718526 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjvcf\" (UniqueName: \"kubernetes.io/projected/2705ac1d-821f-47a8-b8a9-960ed06e9280-kube-api-access-vjvcf\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.718596 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-config-data\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.718636 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.718658 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-public-tls-certs\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.718714 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-scripts\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.718730 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-internal-tls-certs\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.820284 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjvcf\" (UniqueName: \"kubernetes.io/projected/2705ac1d-821f-47a8-b8a9-960ed06e9280-kube-api-access-vjvcf\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " 
pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.820355 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-config-data\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.820393 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.821443 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-public-tls-certs\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.821560 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-scripts\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.821589 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-internal-tls-certs\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.829148 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-scripts\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.829466 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-public-tls-certs\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.829794 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-internal-tls-certs\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.830084 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-combined-ca-bundle\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.830423 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-config-data\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.841092 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vjvcf\" (UniqueName: \"kubernetes.io/projected/2705ac1d-821f-47a8-b8a9-960ed06e9280-kube-api-access-vjvcf\") pod \"aodh-0\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") " pod="openstack/aodh-0" Nov 25 15:50:39 crc kubenswrapper[4731]: I1125 15:50:39.843590 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Nov 25 15:50:40 crc kubenswrapper[4731]: I1125 15:50:40.313237 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Nov 25 15:50:40 crc kubenswrapper[4731]: I1125 15:50:40.451359 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2705ac1d-821f-47a8-b8a9-960ed06e9280","Type":"ContainerStarted","Data":"8630b9a9f93000f321ae861443692850063f20609997137abf8d5126c1d77c77"} Nov 25 15:50:40 crc kubenswrapper[4731]: I1125 15:50:40.824831 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e68de59d-3f4f-46ce-9543-257fc725e94b" path="/var/lib/kubelet/pods/e68de59d-3f4f-46ce-9543-257fc725e94b/volumes" Nov 25 15:50:41 crc kubenswrapper[4731]: I1125 15:50:41.470575 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2705ac1d-821f-47a8-b8a9-960ed06e9280","Type":"ContainerStarted","Data":"86f1b0a2a52bc4d05ce8efc9353e601c84cf463e51db9db5cc4c14d61efc05ef"} Nov 25 15:50:42 crc kubenswrapper[4731]: I1125 15:50:42.481486 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39039046-22c0-45c4-8791-b43bca4248c1","Type":"ContainerStarted","Data":"10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125"} Nov 25 15:50:42 crc kubenswrapper[4731]: I1125 15:50:42.485140 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2705ac1d-821f-47a8-b8a9-960ed06e9280","Type":"ContainerStarted","Data":"078e3485825d039f586adcc3aadb737647de1e32f77c6ba31b1b91a2d2bfda62"} Nov 25 15:50:43 crc kubenswrapper[4731]: I1125 15:50:43.499431 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2705ac1d-821f-47a8-b8a9-960ed06e9280","Type":"ContainerStarted","Data":"5d0a5ba8f6d192a0842b3e0c14b996cf2d6b45b178b4989e094054d81f794a90"} Nov 25 15:50:43 crc kubenswrapper[4731]: I1125 15:50:43.499731 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2705ac1d-821f-47a8-b8a9-960ed06e9280","Type":"ContainerStarted","Data":"b67ddff078d4110d2b0aba391adf12a56b816a836ce1bbc1a38ea1e45b936337"} Nov 25 15:50:43 crc kubenswrapper[4731]: I1125 15:50:43.550082 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=1.6827445509999999 podStartE2EDuration="4.550014041s" podCreationTimestamp="2025-11-25 15:50:39 +0000 UTC" firstStartedPulling="2025-11-25 15:50:40.306682288 +0000 UTC m=+3399.053816708" lastFinishedPulling="2025-11-25 15:50:43.173951758 +0000 UTC m=+3401.921086198" observedRunningTime="2025-11-25 15:50:43.527602135 +0000 UTC m=+3402.274736565" watchObservedRunningTime="2025-11-25 15:50:43.550014041 +0000 UTC m=+3402.297148531" Nov 25 15:50:50 crc kubenswrapper[4731]: I1125 15:50:50.561430 4731 generic.go:334] "Generic (PLEG): container finished" podID="39039046-22c0-45c4-8791-b43bca4248c1" containerID="10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125" exitCode=0 Nov 25 15:50:50 crc kubenswrapper[4731]: I1125 15:50:50.561503 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/prometheus-metric-storage-0" event={"ID":"39039046-22c0-45c4-8791-b43bca4248c1","Type":"ContainerDied","Data":"10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125"} Nov 25 15:50:51 crc kubenswrapper[4731]: I1125 15:50:51.582741 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39039046-22c0-45c4-8791-b43bca4248c1","Type":"ContainerStarted","Data":"0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8"} Nov 25 15:50:54 crc kubenswrapper[4731]: I1125 15:50:54.621436 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39039046-22c0-45c4-8791-b43bca4248c1","Type":"ContainerStarted","Data":"80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738"} Nov 25 15:50:54 crc kubenswrapper[4731]: I1125 15:50:54.621916 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39039046-22c0-45c4-8791-b43bca4248c1","Type":"ContainerStarted","Data":"8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5"} Nov 25 15:50:54 crc kubenswrapper[4731]: I1125 15:50:54.654194 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.654174523000002 podStartE2EDuration="17.654174523s" podCreationTimestamp="2025-11-25 15:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:50:54.645921978 +0000 UTC m=+3413.393056428" watchObservedRunningTime="2025-11-25 15:50:54.654174523 +0000 UTC m=+3413.401308953" Nov 25 15:50:58 crc kubenswrapper[4731]: I1125 15:50:58.109013 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Nov 25 15:51:08 crc kubenswrapper[4731]: I1125 15:51:08.108783 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Nov 25 15:51:08 crc kubenswrapper[4731]: I1125 15:51:08.114781 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Nov 25 15:51:09 crc kubenswrapper[4731]: I1125 15:51:09.019753 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Nov 25 15:51:20 crc kubenswrapper[4731]: I1125 15:51:20.039807 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:51:20 crc kubenswrapper[4731]: I1125 15:51:20.041968 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:51:36 crc kubenswrapper[4731]: I1125 15:51:36.908429 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pbrtl"] Nov 25 15:51:36 crc kubenswrapper[4731]: I1125 15:51:36.911068 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:36 crc kubenswrapper[4731]: I1125 15:51:36.933884 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pbrtl"] Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.044666 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-utilities\") pod \"certified-operators-pbrtl\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.044840 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-catalog-content\") pod \"certified-operators-pbrtl\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.045374 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx48f\" (UniqueName: \"kubernetes.io/projected/8f297be3-c446-43f7-a6f1-15f8bacb7706-kube-api-access-bx48f\") pod \"certified-operators-pbrtl\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.147146 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx48f\" (UniqueName: \"kubernetes.io/projected/8f297be3-c446-43f7-a6f1-15f8bacb7706-kube-api-access-bx48f\") pod \"certified-operators-pbrtl\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.147207 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-utilities\") pod \"certified-operators-pbrtl\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.147240 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-catalog-content\") pod \"certified-operators-pbrtl\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.147845 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-utilities\") pod \"certified-operators-pbrtl\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.147899 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-catalog-content\") pod \"certified-operators-pbrtl\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.173852 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-bx48f\" (UniqueName: \"kubernetes.io/projected/8f297be3-c446-43f7-a6f1-15f8bacb7706-kube-api-access-bx48f\") pod \"certified-operators-pbrtl\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.237048 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:37 crc kubenswrapper[4731]: I1125 15:51:37.782448 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pbrtl"] Nov 25 15:51:38 crc kubenswrapper[4731]: I1125 15:51:38.341106 4731 generic.go:334] "Generic (PLEG): container finished" podID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerID="44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6" exitCode=0 Nov 25 15:51:38 crc kubenswrapper[4731]: I1125 15:51:38.341149 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbrtl" event={"ID":"8f297be3-c446-43f7-a6f1-15f8bacb7706","Type":"ContainerDied","Data":"44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6"} Nov 25 15:51:38 crc kubenswrapper[4731]: I1125 15:51:38.341174 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbrtl" event={"ID":"8f297be3-c446-43f7-a6f1-15f8bacb7706","Type":"ContainerStarted","Data":"ff5822bbd3c5e5f7de8841d460685fff3a5e4703e1913efd82db83745afcf9f0"} Nov 25 15:51:39 crc kubenswrapper[4731]: I1125 15:51:39.359945 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbrtl" event={"ID":"8f297be3-c446-43f7-a6f1-15f8bacb7706","Type":"ContainerStarted","Data":"d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9"} Nov 25 15:51:40 crc kubenswrapper[4731]: I1125 15:51:40.371193 4731 generic.go:334] "Generic (PLEG): container finished" podID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerID="d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9" exitCode=0 Nov 25 15:51:40 crc kubenswrapper[4731]: I1125 15:51:40.371301 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbrtl" event={"ID":"8f297be3-c446-43f7-a6f1-15f8bacb7706","Type":"ContainerDied","Data":"d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9"} Nov 25 15:51:41 crc kubenswrapper[4731]: I1125 15:51:41.386187 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbrtl" event={"ID":"8f297be3-c446-43f7-a6f1-15f8bacb7706","Type":"ContainerStarted","Data":"4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8"} Nov 25 15:51:41 crc kubenswrapper[4731]: I1125 15:51:41.410830 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pbrtl" podStartSLOduration=2.955926993 podStartE2EDuration="5.410792186s" podCreationTimestamp="2025-11-25 15:51:36 +0000 UTC" firstStartedPulling="2025-11-25 15:51:38.342655706 +0000 UTC m=+3457.089790136" lastFinishedPulling="2025-11-25 15:51:40.797520859 +0000 UTC m=+3459.544655329" observedRunningTime="2025-11-25 15:51:41.406002907 +0000 UTC m=+3460.153137347" watchObservedRunningTime="2025-11-25 15:51:41.410792186 +0000 UTC m=+3460.157926616" Nov 25 15:51:47 crc kubenswrapper[4731]: I1125 15:51:47.237503 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:47 crc kubenswrapper[4731]: I1125 15:51:47.238403 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:47 crc kubenswrapper[4731]: I1125 15:51:47.282622 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:47 crc kubenswrapper[4731]: I1125 15:51:47.510030 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:47 crc kubenswrapper[4731]: I1125 15:51:47.568613 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pbrtl"] Nov 25 15:51:49 crc kubenswrapper[4731]: I1125 15:51:49.481883 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pbrtl" podUID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerName="registry-server" containerID="cri-o://4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8" gracePeriod=2 Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.035929 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.040348 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.040428 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.157869 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-catalog-content\") pod \"8f297be3-c446-43f7-a6f1-15f8bacb7706\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.157937 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-utilities\") pod \"8f297be3-c446-43f7-a6f1-15f8bacb7706\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.158023 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx48f\" (UniqueName: \"kubernetes.io/projected/8f297be3-c446-43f7-a6f1-15f8bacb7706-kube-api-access-bx48f\") pod \"8f297be3-c446-43f7-a6f1-15f8bacb7706\" (UID: \"8f297be3-c446-43f7-a6f1-15f8bacb7706\") " Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.159752 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-utilities" (OuterVolumeSpecName: "utilities") pod "8f297be3-c446-43f7-a6f1-15f8bacb7706" (UID: "8f297be3-c446-43f7-a6f1-15f8bacb7706"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.164075 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f297be3-c446-43f7-a6f1-15f8bacb7706-kube-api-access-bx48f" (OuterVolumeSpecName: "kube-api-access-bx48f") pod "8f297be3-c446-43f7-a6f1-15f8bacb7706" (UID: "8f297be3-c446-43f7-a6f1-15f8bacb7706"). InnerVolumeSpecName "kube-api-access-bx48f". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.210166 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8f297be3-c446-43f7-a6f1-15f8bacb7706" (UID: "8f297be3-c446-43f7-a6f1-15f8bacb7706"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.259928 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.259960 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8f297be3-c446-43f7-a6f1-15f8bacb7706-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.259970 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx48f\" (UniqueName: \"kubernetes.io/projected/8f297be3-c446-43f7-a6f1-15f8bacb7706-kube-api-access-bx48f\") on node \"crc\" DevicePath \"\"" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.493180 4731 generic.go:334] "Generic (PLEG): container finished" podID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerID="4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8" exitCode=0 Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.493258 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbrtl" event={"ID":"8f297be3-c446-43f7-a6f1-15f8bacb7706","Type":"ContainerDied","Data":"4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8"} Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.494005 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pbrtl" event={"ID":"8f297be3-c446-43f7-a6f1-15f8bacb7706","Type":"ContainerDied","Data":"ff5822bbd3c5e5f7de8841d460685fff3a5e4703e1913efd82db83745afcf9f0"} Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.493337 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pbrtl" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.494080 4731 scope.go:117] "RemoveContainer" containerID="4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.517105 4731 scope.go:117] "RemoveContainer" containerID="d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.539467 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pbrtl"] Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.547227 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pbrtl"] Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.562067 4731 scope.go:117] "RemoveContainer" containerID="44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.608260 4731 scope.go:117] "RemoveContainer" containerID="4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8" Nov 25 15:51:50 crc kubenswrapper[4731]: E1125 15:51:50.608695 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8\": container with ID starting with 4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8 not found: ID does not exist" containerID="4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.608801 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8"} err="failed to get container status \"4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8\": rpc error: code = NotFound desc = could not find container \"4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8\": container with ID starting with 4f951c200389cab9175733ba347034c2d4ab636e236c4f719f91fa2b8bf93dc8 not found: ID does not exist" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.608902 4731 scope.go:117] "RemoveContainer" containerID="d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9" Nov 25 15:51:50 crc kubenswrapper[4731]: E1125 15:51:50.609353 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9\": container with ID starting with d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9 not found: ID does not exist" containerID="d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.609452 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9"} err="failed to get container status \"d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9\": rpc error: code = NotFound desc = could not find container \"d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9\": container with ID starting with d0cb96f30a9690b37ef8c3c8586115bfb311f9ddf376ac7f6c6ea263a5abe7c9 not found: ID does not exist" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.609514 4731 scope.go:117] "RemoveContainer" 
containerID="44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6" Nov 25 15:51:50 crc kubenswrapper[4731]: E1125 15:51:50.609777 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6\": container with ID starting with 44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6 not found: ID does not exist" containerID="44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.609880 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6"} err="failed to get container status \"44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6\": rpc error: code = NotFound desc = could not find container \"44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6\": container with ID starting with 44ef2d09419e7d89bc40ab5c3549f98a30b8bd79a0cfc713dd76897bbf7edbf6 not found: ID does not exist" Nov 25 15:51:50 crc kubenswrapper[4731]: I1125 15:51:50.822225 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f297be3-c446-43f7-a6f1-15f8bacb7706" path="/var/lib/kubelet/pods/8f297be3-c446-43f7-a6f1-15f8bacb7706/volumes" Nov 25 15:52:06 crc kubenswrapper[4731]: I1125 15:52:06.089027 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-58bb8d67cc-2npjw" podUID="1af6726c-a2ca-43a1-b932-cddcd28457d1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:52:06 crc kubenswrapper[4731]: I1125 15:52:06.144426 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-7c57c8bbc4-j2wt5" podUID="f5ea628e-44a7-499c-a433-14d18a9bb804" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Nov 25 15:52:20 crc kubenswrapper[4731]: I1125 15:52:20.040149 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 15:52:20 crc kubenswrapper[4731]: I1125 15:52:20.041203 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 15:52:20 crc kubenswrapper[4731]: I1125 15:52:20.041266 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 15:52:20 crc kubenswrapper[4731]: I1125 15:52:20.042164 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2f9e4670024f055588fd4f90acc9b51e64b4b08f50f884f4bf445bf1c3e24ba"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Nov 25 15:52:20 crc kubenswrapper[4731]: I1125 15:52:20.042237 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://b2f9e4670024f055588fd4f90acc9b51e64b4b08f50f884f4bf445bf1c3e24ba" gracePeriod=600 Nov 25 15:52:20 crc kubenswrapper[4731]: I1125 15:52:20.827854 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="b2f9e4670024f055588fd4f90acc9b51e64b4b08f50f884f4bf445bf1c3e24ba" exitCode=0 Nov 25 15:52:20 crc kubenswrapper[4731]: I1125 15:52:20.827932 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"b2f9e4670024f055588fd4f90acc9b51e64b4b08f50f884f4bf445bf1c3e24ba"} Nov 25 15:52:20 crc kubenswrapper[4731]: I1125 15:52:20.828186 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"} Nov 25 15:52:20 crc kubenswrapper[4731]: I1125 15:52:20.828204 4731 scope.go:117] "RemoveContainer" containerID="1dad87d118d0a896c1a0def00217f27cb18c4fa5dacc821d35e4423ca8961249" Nov 25 15:53:13 crc kubenswrapper[4731]: I1125 15:53:13.778505 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_089e8703-f71e-4f81-91af-19224ba7b1b4/manager/0.log" Nov 25 15:53:16 crc kubenswrapper[4731]: I1125 15:53:16.249605 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:53:16 crc kubenswrapper[4731]: I1125 15:53:16.251145 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="prometheus" containerID="cri-o://0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8" gracePeriod=600 Nov 25 15:53:16 crc kubenswrapper[4731]: I1125 15:53:16.251500 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="thanos-sidecar" containerID="cri-o://80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738" gracePeriod=600 Nov 25 15:53:16 crc kubenswrapper[4731]: I1125 15:53:16.251557 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="config-reloader" containerID="cri-o://8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5" gracePeriod=600 Nov 25 15:53:16 crc kubenswrapper[4731]: I1125 15:53:16.450424 4731 generic.go:334] "Generic (PLEG): container finished" podID="39039046-22c0-45c4-8791-b43bca4248c1" containerID="80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738" exitCode=0 Nov 25 15:53:16 crc kubenswrapper[4731]: I1125 15:53:16.450458 4731 generic.go:334] "Generic (PLEG): container finished" podID="39039046-22c0-45c4-8791-b43bca4248c1" 
containerID="0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8" exitCode=0 Nov 25 15:53:16 crc kubenswrapper[4731]: I1125 15:53:16.450506 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39039046-22c0-45c4-8791-b43bca4248c1","Type":"ContainerDied","Data":"80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738"} Nov 25 15:53:16 crc kubenswrapper[4731]: I1125 15:53:16.450587 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39039046-22c0-45c4-8791-b43bca4248c1","Type":"ContainerDied","Data":"0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8"} Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.239585 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.323661 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-secret-combined-ca-bundle\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.323730 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-tls-assets\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.323784 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56s52\" (UniqueName: \"kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-kube-api-access-56s52\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.323882 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-config\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.323977 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39039046-22c0-45c4-8791-b43bca4248c1-prometheus-metric-storage-rulefiles-0\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.324010 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-thanos-prometheus-http-client-file\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.324052 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39039046-22c0-45c4-8791-b43bca4248c1-config-out\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.324125 4731 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.324174 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.324306 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.324402 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"39039046-22c0-45c4-8791-b43bca4248c1\" (UID: \"39039046-22c0-45c4-8791-b43bca4248c1\") " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.325066 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39039046-22c0-45c4-8791-b43bca4248c1-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.331618 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.333076 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39039046-22c0-45c4-8791-b43bca4248c1-config-out" (OuterVolumeSpecName: "config-out") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.333136 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.333131 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-kube-api-access-56s52" (OuterVolumeSpecName: "kube-api-access-56s52") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "kube-api-access-56s52". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.334601 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.334669 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.336761 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-config" (OuterVolumeSpecName: "config") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.341580 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.357878 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6". PluginName "kubernetes.io/csi", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.417401 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config" (OuterVolumeSpecName: "web-config") pod "39039046-22c0-45c4-8791-b43bca4248c1" (UID: "39039046-22c0-45c4-8791-b43bca4248c1"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.426935 4731 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.427036 4731 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.427053 4731 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.427066 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56s52\" (UniqueName: \"kubernetes.io/projected/39039046-22c0-45c4-8791-b43bca4248c1-kube-api-access-56s52\") on node \"crc\" DevicePath \"\"" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.427078 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.427091 4731 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/39039046-22c0-45c4-8791-b43bca4248c1-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.427105 4731 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.427116 4731 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/39039046-22c0-45c4-8791-b43bca4248c1-config-out\") on node \"crc\" DevicePath \"\"" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.427129 4731 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.427162 4731 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/39039046-22c0-45c4-8791-b43bca4248c1-web-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.427203 4731 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") on node \"crc\" " Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.452358 4731 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.452548 4731 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6") on node "crc"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.464399 4731 generic.go:334] "Generic (PLEG): container finished" podID="39039046-22c0-45c4-8791-b43bca4248c1" containerID="8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5" exitCode=0
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.464447 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39039046-22c0-45c4-8791-b43bca4248c1","Type":"ContainerDied","Data":"8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5"}
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.464477 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"39039046-22c0-45c4-8791-b43bca4248c1","Type":"ContainerDied","Data":"53186209d0cac1acbd454ed914f3859228859ead505b3314df1a690544df94c4"}
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.464479 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.464494 4731 scope.go:117] "RemoveContainer" containerID="80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.514250 4731 scope.go:117] "RemoveContainer" containerID="8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.527562 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.529516 4731 reconciler_common.go:293] "Volume detached for volume \"pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4ca34f82-5bd3-4709-8411-a164d58bd4b6\") on node \"crc\" DevicePath \"\""
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.541146 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.549505 4731 scope.go:117] "RemoveContainer" containerID="0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.572323 4731 scope.go:117] "RemoveContainer" containerID="10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.595763 4731 scope.go:117] "RemoveContainer" containerID="80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738"
Nov 25 15:53:17 crc kubenswrapper[4731]: E1125 15:53:17.596214 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738\": container with ID starting with 80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738 not found: ID does not exist" containerID="80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.596253 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738"} err="failed to get container status \"80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738\": rpc error: code = NotFound desc = could not find container \"80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738\": container with ID starting with 80aad816e98db0e0288586f2940496a1c6662e49152677205629d500d3691738 not found: ID does not exist"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.596276 4731 scope.go:117] "RemoveContainer" containerID="8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5"
Nov 25 15:53:17 crc kubenswrapper[4731]: E1125 15:53:17.596645 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5\": container with ID starting with 8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5 not found: ID does not exist" containerID="8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.596694 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5"} err="failed to get container status \"8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5\": rpc error: code = NotFound desc = could not find container \"8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5\": container with ID starting with 8e3e02c911e6f94f589d0f14ce270805e76abb4e7985ed22e01e2937e52020b5 not found: ID does not exist"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.596727 4731 scope.go:117] "RemoveContainer" containerID="0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8"
Nov 25 15:53:17 crc kubenswrapper[4731]: E1125 15:53:17.597000 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8\": container with ID starting with 0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8 not found: ID does not exist" containerID="0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.597024 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8"} err="failed to get container status \"0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8\": rpc error: code = NotFound desc = could not find container \"0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8\": container with ID starting with 0c064efd0cdfed86cfa057473b5ecbbb7134bd4b195e884f935ea178731880b8 not found: ID does not exist"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.597038 4731 scope.go:117] "RemoveContainer" containerID="10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125"
Nov 25 15:53:17 crc kubenswrapper[4731]: E1125 15:53:17.597226 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125\": container with ID starting with 10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125 not found: ID does not exist" containerID="10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125"
Nov 25 15:53:17 crc kubenswrapper[4731]: I1125 15:53:17.597257 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125"} err="failed to get container status \"10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125\": rpc error: code = NotFound desc = could not find container \"10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125\": container with ID starting with 10b250f646b6ab656590a3415af922eaa95218e7cbcc86baddfa6dc285bf4125 not found: ID does not exist"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.440263 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:53:18 crc kubenswrapper[4731]: E1125 15:53:18.441117 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerName="extract-utilities"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441138 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerName="extract-utilities"
Nov 25 15:53:18 crc kubenswrapper[4731]: E1125 15:53:18.441167 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="prometheus"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441178 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="prometheus"
Nov 25 15:53:18 crc kubenswrapper[4731]: E1125 15:53:18.441193 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerName="registry-server"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441203 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerName="registry-server"
Nov 25 15:53:18 crc kubenswrapper[4731]: E1125 15:53:18.441232 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="config-reloader"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441243 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="config-reloader"
Nov 25 15:53:18 crc kubenswrapper[4731]: E1125 15:53:18.441262 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="init-config-reloader"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441271 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="init-config-reloader"
Nov 25 15:53:18 crc kubenswrapper[4731]: E1125 15:53:18.441289 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="thanos-sidecar"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441298 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="thanos-sidecar"
Nov 25 15:53:18 crc kubenswrapper[4731]: E1125 15:53:18.441313 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerName="extract-content"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441322 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerName="extract-content"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441595 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f297be3-c446-43f7-a6f1-15f8bacb7706" containerName="registry-server"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441616 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="config-reloader"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441634 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="thanos-sidecar"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.441654 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="39039046-22c0-45c4-8791-b43bca4248c1" containerName="prometheus"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.444122 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.447000 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.447281 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.447324 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.448018 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.448770 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.454153 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-sf6fp"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.463303 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.472933 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548052 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548126 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548164 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548188 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548212 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548236 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tttqb\" (UniqueName: \"kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-kube-api-access-tttqb\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548253 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548312 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548334 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548371 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.548454 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.650581 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.650720 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.650780 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.650843 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.650872 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.650901 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.650934 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tttqb\" (UniqueName: \"kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-kube-api-access-tttqb\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.650959 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.651012 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.651037 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.651089 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.651375 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.652153 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.657714 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.657772 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.658377 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.658732 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.660261 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.660809 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.661349 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.664307 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.676430 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tttqb\" (UniqueName: \"kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-kube-api-access-tttqb\") pod \"prometheus-metric-storage-0\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.815385 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:18 crc kubenswrapper[4731]: I1125 15:53:18.830417 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39039046-22c0-45c4-8791-b43bca4248c1" path="/var/lib/kubelet/pods/39039046-22c0-45c4-8791-b43bca4248c1/volumes"
Nov 25 15:53:19 crc kubenswrapper[4731]: I1125 15:53:19.303559 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:53:19 crc kubenswrapper[4731]: I1125 15:53:19.488775 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d943fa9-bb0e-4385-a412-8e982cd59e96","Type":"ContainerStarted","Data":"8c606fcdd89e6f84e02053adc10c16ad4c3b391466fe0d0b627d20c4c6c47d62"}
Nov 25 15:53:23 crc kubenswrapper[4731]: I1125 15:53:23.534668 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d943fa9-bb0e-4385-a412-8e982cd59e96","Type":"ContainerStarted","Data":"eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4"}
Nov 25 15:53:30 crc kubenswrapper[4731]: I1125 15:53:30.606116 4731 generic.go:334] "Generic (PLEG): container finished" podID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerID="eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4" exitCode=0
Nov 25 15:53:30 crc kubenswrapper[4731]: I1125 15:53:30.606222 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d943fa9-bb0e-4385-a412-8e982cd59e96","Type":"ContainerDied","Data":"eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4"}
Nov 25 15:53:31 crc kubenswrapper[4731]: I1125 15:53:31.625474 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d943fa9-bb0e-4385-a412-8e982cd59e96","Type":"ContainerStarted","Data":"f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a"}
Nov 25 15:53:35 crc kubenswrapper[4731]: I1125 15:53:35.670168 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d943fa9-bb0e-4385-a412-8e982cd59e96","Type":"ContainerStarted","Data":"c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a"}
Nov 25 15:53:35 crc kubenswrapper[4731]: I1125 15:53:35.670782 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d943fa9-bb0e-4385-a412-8e982cd59e96","Type":"ContainerStarted","Data":"e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9"}
Nov 25 15:53:35 crc kubenswrapper[4731]: I1125 15:53:35.702500 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=17.702469772 podStartE2EDuration="17.702469772s" podCreationTimestamp="2025-11-25 15:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:53:35.701612411 +0000 UTC m=+3574.448746851" watchObservedRunningTime="2025-11-25 15:53:35.702469772 +0000 UTC m=+3574.449604202"
Nov 25 15:53:38 crc kubenswrapper[4731]: I1125 15:53:38.822933 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:48 crc kubenswrapper[4731]: I1125 15:53:48.830610 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:48 crc kubenswrapper[4731]: I1125 15:53:48.831510 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:53:48 crc kubenswrapper[4731]: I1125 15:53:48.842009 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:54:20 crc kubenswrapper[4731]: I1125 15:54:20.039985 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:54:20 crc kubenswrapper[4731]: I1125 15:54:20.040951 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:54:49 crc kubenswrapper[4731]: I1125 15:54:49.049926 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-500b-account-create-dczmz"]
Nov 25 15:54:49 crc kubenswrapper[4731]: I1125 15:54:49.061165 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-fmqbg"]
Nov 25 15:54:49 crc kubenswrapper[4731]: I1125 15:54:49.071044 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-500b-account-create-dczmz"]
Nov 25 15:54:49 crc kubenswrapper[4731]: I1125 15:54:49.082531 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-fmqbg"]
Nov 25 15:54:50 crc kubenswrapper[4731]: I1125 15:54:50.040474 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:54:50 crc kubenswrapper[4731]: I1125 15:54:50.040537 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:54:50 crc kubenswrapper[4731]: I1125 15:54:50.823241 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330ce20e-583d-4d35-86e3-be01b80a00e4" path="/var/lib/kubelet/pods/330ce20e-583d-4d35-86e3-be01b80a00e4/volumes"
Nov 25 15:54:50 crc kubenswrapper[4731]: I1125 15:54:50.824013 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed9de000-2e16-4a0d-9c13-3a7e206d6315" path="/var/lib/kubelet/pods/ed9de000-2e16-4a0d-9c13-3a7e206d6315/volumes"
Nov 25 15:54:59 crc kubenswrapper[4731]: I1125 15:54:59.033510 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-mmdqv"]
Nov 25 15:54:59 crc kubenswrapper[4731]: I1125 15:54:59.043724 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-mmdqv"]
Nov 25 15:55:00 crc kubenswrapper[4731]: I1125 15:55:00.822641 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eb3c859-484c-41c6-b15f-9f565a6a9eb6" path="/var/lib/kubelet/pods/0eb3c859-484c-41c6-b15f-9f565a6a9eb6/volumes"
Nov 25 15:55:16 crc kubenswrapper[4731]: I1125 15:55:16.082366 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_089e8703-f71e-4f81-91af-19224ba7b1b4/manager/0.log"
Nov 25 15:55:16 crc kubenswrapper[4731]: I1125 15:55:16.780273 4731 scope.go:117] "RemoveContainer" containerID="08ef53bad59317d71001bc280a2ffbbae3ee012001a91fba50371676257bd4e5"
Nov 25 15:55:16 crc kubenswrapper[4731]: I1125 15:55:16.817756 4731 scope.go:117] "RemoveContainer" containerID="ed9d82f220e8a9c365b10e90e1044b3aaf3afc8dc84b1eec0f659969d9f4eafd"
Nov 25 15:55:16 crc kubenswrapper[4731]: I1125 15:55:16.880258 4731 scope.go:117] "RemoveContainer" containerID="1578519684ce643745dafa27f50236734165be45ab0c2fce01f964330980712f"
Nov 25 15:55:17 crc kubenswrapper[4731]: I1125 15:55:17.513887 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Nov 25 15:55:17 crc kubenswrapper[4731]: I1125 15:55:17.514226 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-api" containerID="cri-o://86f1b0a2a52bc4d05ce8efc9353e601c84cf463e51db9db5cc4c14d61efc05ef" gracePeriod=30
Nov 25 15:55:17 crc kubenswrapper[4731]: I1125 15:55:17.514270 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-listener" containerID="cri-o://5d0a5ba8f6d192a0842b3e0c14b996cf2d6b45b178b4989e094054d81f794a90" gracePeriod=30
Nov 25 15:55:17 crc kubenswrapper[4731]: I1125 15:55:17.514369 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-notifier" containerID="cri-o://b67ddff078d4110d2b0aba391adf12a56b816a836ce1bbc1a38ea1e45b936337" gracePeriod=30
Nov 25 15:55:17 crc kubenswrapper[4731]: I1125 15:55:17.514424 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-evaluator" containerID="cri-o://078e3485825d039f586adcc3aadb737647de1e32f77c6ba31b1b91a2d2bfda62" gracePeriod=30
Nov 25 15:55:18 crc kubenswrapper[4731]: I1125 15:55:18.738933 4731 generic.go:334] "Generic (PLEG): container finished" podID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerID="078e3485825d039f586adcc3aadb737647de1e32f77c6ba31b1b91a2d2bfda62" exitCode=0
Nov 25 15:55:18 crc kubenswrapper[4731]: I1125 15:55:18.739239 4731 generic.go:334] "Generic (PLEG): container finished" podID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerID="86f1b0a2a52bc4d05ce8efc9353e601c84cf463e51db9db5cc4c14d61efc05ef" exitCode=0
Nov 25 15:55:18 crc kubenswrapper[4731]: I1125 15:55:18.738973 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2705ac1d-821f-47a8-b8a9-960ed06e9280","Type":"ContainerDied","Data":"078e3485825d039f586adcc3aadb737647de1e32f77c6ba31b1b91a2d2bfda62"}
Nov 25 15:55:18 crc kubenswrapper[4731]: I1125 15:55:18.739276 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2705ac1d-821f-47a8-b8a9-960ed06e9280","Type":"ContainerDied","Data":"86f1b0a2a52bc4d05ce8efc9353e601c84cf463e51db9db5cc4c14d61efc05ef"}
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.040485 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.040844 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.040899 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz"
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.041668 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.041723 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" gracePeriod=600
Nov 25 15:55:20 crc kubenswrapper[4731]: E1125 15:55:20.227000 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.770504 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" exitCode=0
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.770581 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"}
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.770948 4731 scope.go:117] "RemoveContainer" containerID="b2f9e4670024f055588fd4f90acc9b51e64b4b08f50f884f4bf445bf1c3e24ba"
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.771612 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:55:20 crc kubenswrapper[4731]: E1125 15:55:20.771979 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.776983 4731 generic.go:334] "Generic (PLEG): container finished" podID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerID="5d0a5ba8f6d192a0842b3e0c14b996cf2d6b45b178b4989e094054d81f794a90" exitCode=0
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.777020 4731 generic.go:334] "Generic (PLEG): container finished" podID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerID="b67ddff078d4110d2b0aba391adf12a56b816a836ce1bbc1a38ea1e45b936337" exitCode=0
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.777045 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2705ac1d-821f-47a8-b8a9-960ed06e9280","Type":"ContainerDied","Data":"5d0a5ba8f6d192a0842b3e0c14b996cf2d6b45b178b4989e094054d81f794a90"}
Nov 25 15:55:20 crc kubenswrapper[4731]: I1125 15:55:20.777074 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2705ac1d-821f-47a8-b8a9-960ed06e9280","Type":"ContainerDied","Data":"b67ddff078d4110d2b0aba391adf12a56b816a836ce1bbc1a38ea1e45b936337"}
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.052341 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.205830 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjvcf\" (UniqueName: \"kubernetes.io/projected/2705ac1d-821f-47a8-b8a9-960ed06e9280-kube-api-access-vjvcf\") pod \"2705ac1d-821f-47a8-b8a9-960ed06e9280\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") "
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.205916 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-config-data\") pod \"2705ac1d-821f-47a8-b8a9-960ed06e9280\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") "
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.206063 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-combined-ca-bundle\") pod \"2705ac1d-821f-47a8-b8a9-960ed06e9280\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") "
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.206109 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-public-tls-certs\") pod \"2705ac1d-821f-47a8-b8a9-960ed06e9280\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") "
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.206234 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-scripts\") pod \"2705ac1d-821f-47a8-b8a9-960ed06e9280\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") "
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.206315 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-internal-tls-certs\") pod \"2705ac1d-821f-47a8-b8a9-960ed06e9280\" (UID: \"2705ac1d-821f-47a8-b8a9-960ed06e9280\") "
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.213847 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2705ac1d-821f-47a8-b8a9-960ed06e9280-kube-api-access-vjvcf" (OuterVolumeSpecName: "kube-api-access-vjvcf") pod "2705ac1d-821f-47a8-b8a9-960ed06e9280" (UID: "2705ac1d-821f-47a8-b8a9-960ed06e9280"). InnerVolumeSpecName "kube-api-access-vjvcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.226033 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-scripts" (OuterVolumeSpecName: "scripts") pod "2705ac1d-821f-47a8-b8a9-960ed06e9280" (UID: "2705ac1d-821f-47a8-b8a9-960ed06e9280"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.262639 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "2705ac1d-821f-47a8-b8a9-960ed06e9280" (UID: "2705ac1d-821f-47a8-b8a9-960ed06e9280"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.272844 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2705ac1d-821f-47a8-b8a9-960ed06e9280" (UID: "2705ac1d-821f-47a8-b8a9-960ed06e9280"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.310307 4731 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-public-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.310361 4731 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-scripts\") on node \"crc\" DevicePath \"\""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.310379 4731 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.310395 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjvcf\" (UniqueName: \"kubernetes.io/projected/2705ac1d-821f-47a8-b8a9-960ed06e9280-kube-api-access-vjvcf\") on node \"crc\" DevicePath \"\""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.328842 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2705ac1d-821f-47a8-b8a9-960ed06e9280" (UID: "2705ac1d-821f-47a8-b8a9-960ed06e9280"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.346127 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-config-data" (OuterVolumeSpecName: "config-data") pod "2705ac1d-821f-47a8-b8a9-960ed06e9280" (UID: "2705ac1d-821f-47a8-b8a9-960ed06e9280"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.412223 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.412253 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2705ac1d-821f-47a8-b8a9-960ed06e9280-config-data\") on node \"crc\" DevicePath \"\""
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.789740 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"2705ac1d-821f-47a8-b8a9-960ed06e9280","Type":"ContainerDied","Data":"8630b9a9f93000f321ae861443692850063f20609997137abf8d5126c1d77c77"}
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.789809 4731 scope.go:117] "RemoveContainer" containerID="5d0a5ba8f6d192a0842b3e0c14b996cf2d6b45b178b4989e094054d81f794a90"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.789905 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.809102 4731 scope.go:117] "RemoveContainer" containerID="b67ddff078d4110d2b0aba391adf12a56b816a836ce1bbc1a38ea1e45b936337"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.843242 4731 scope.go:117] "RemoveContainer" containerID="078e3485825d039f586adcc3aadb737647de1e32f77c6ba31b1b91a2d2bfda62"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.847348 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.861982 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.873165 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"]
Nov 25 15:55:21 crc kubenswrapper[4731]: E1125 15:55:21.873589 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-api"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.873609 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-api"
Nov 25 15:55:21 crc kubenswrapper[4731]: E1125 15:55:21.873630 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-listener"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.873636 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-listener"
Nov 25 15:55:21 crc kubenswrapper[4731]: E1125 15:55:21.873658 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-evaluator"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.873663 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-evaluator"
Nov 25 15:55:21 crc kubenswrapper[4731]: E1125 15:55:21.873675 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-notifier"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.873681 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-notifier"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.873864 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-notifier"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.873879 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-evaluator"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.873891 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-api"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.873911 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" containerName="aodh-listener"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.876271 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.883798 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.885936 4731 scope.go:117] "RemoveContainer" containerID="86f1b0a2a52bc4d05ce8efc9353e601c84cf463e51db9db5cc4c14d61efc05ef"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.893307 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.893637 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.893486 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.894204 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-h9h4w"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.896741 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.934454 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-public-tls-certs\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.934533 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-internal-tls-certs\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.934693 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-config-data\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.934736 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.934861 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkp9r\" (UniqueName: \"kubernetes.io/projected/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-kube-api-access-bkp9r\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:21 crc kubenswrapper[4731]: I1125 15:55:21.934950 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-scripts\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.037265 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-internal-tls-certs\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.037428 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-config-data\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.037465 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.037544 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkp9r\" (UniqueName: \"kubernetes.io/projected/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-kube-api-access-bkp9r\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.037628 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-scripts\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.037676 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-public-tls-certs\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.042630 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-scripts\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.043432 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-config-data\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.043457 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-public-tls-certs\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.043859 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-internal-tls-certs\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.051636 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-combined-ca-bundle\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.055456 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkp9r\" (UniqueName: \"kubernetes.io/projected/e8bbb605-448a-4d7a-a7b1-55d3cacec4f7-kube-api-access-bkp9r\") pod \"aodh-0\" (UID: \"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7\") " pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.219554 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.788916 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.791245 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.801172 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7","Type":"ContainerStarted","Data":"49522aeff76368531a66a402acf9221795ed0e7b5b17d707ab05d7a3ce3ee54e"}
Nov 25 15:55:22 crc kubenswrapper[4731]: I1125 15:55:22.833738 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2705ac1d-821f-47a8-b8a9-960ed06e9280" path="/var/lib/kubelet/pods/2705ac1d-821f-47a8-b8a9-960ed06e9280/volumes"
Nov 25 15:55:24 crc kubenswrapper[4731]: I1125 15:55:24.824093 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7","Type":"ContainerStarted","Data":"a0beff4e213f0125d544fc1808ebaa3b971c6072066c07cd283bbd52e1a0e03b"}
Nov 25 15:55:26 crc kubenswrapper[4731]: I1125 15:55:26.847596 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7","Type":"ContainerStarted","Data":"e2827e5474d60a4ce17c378be6a66819d80cb0274273a08b00e8c9a2a581c122"}
Nov 25 15:55:27 crc kubenswrapper[4731]: I1125 15:55:27.862408 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7","Type":"ContainerStarted","Data":"8ce3dee5be1d257539a8f5792b3ee5b08481c239a72f6defd6446c9ecf1455e1"}
Nov 25 15:55:29 crc kubenswrapper[4731]: I1125 15:55:29.887646 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"e8bbb605-448a-4d7a-a7b1-55d3cacec4f7","Type":"ContainerStarted","Data":"f913442220246bba94bcda68cb01939cd952bfaecff20ceb4660ff720b92a71d"}
Nov 25 15:55:29 crc
kubenswrapper[4731]: I1125 15:55:29.916509 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.856214608 podStartE2EDuration="8.916478782s" podCreationTimestamp="2025-11-25 15:55:21 +0000 UTC" firstStartedPulling="2025-11-25 15:55:22.790969583 +0000 UTC m=+3681.538104013" lastFinishedPulling="2025-11-25 15:55:28.851233737 +0000 UTC m=+3687.598368187" observedRunningTime="2025-11-25 15:55:29.912634626 +0000 UTC m=+3688.659769086" watchObservedRunningTime="2025-11-25 15:55:29.916478782 +0000 UTC m=+3688.663613252" Nov 25 15:55:33 crc kubenswrapper[4731]: I1125 15:55:33.812164 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" Nov 25 15:55:33 crc kubenswrapper[4731]: E1125 15:55:33.812951 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:55:46 crc kubenswrapper[4731]: I1125 15:55:46.811780 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" Nov 25 15:55:46 crc kubenswrapper[4731]: E1125 15:55:46.812641 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:55:58 crc kubenswrapper[4731]: I1125 15:55:58.811290 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" Nov 25 15:55:58 crc kubenswrapper[4731]: E1125 15:55:58.812132 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:56:11 crc kubenswrapper[4731]: I1125 15:56:11.811534 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" Nov 25 15:56:11 crc kubenswrapper[4731]: E1125 15:56:11.812360 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:56:16 crc kubenswrapper[4731]: I1125 15:56:16.972125 4731 scope.go:117] "RemoveContainer" containerID="4b62654d8a1456ff154280cceb518c57721d537a81d6f32ba69ffa9d9a36600b" Nov 25 15:56:17 crc kubenswrapper[4731]: I1125 15:56:17.000332 4731 scope.go:117] "RemoveContainer" 
containerID="cd90116793f6c00f619eb14840467d104a59fdfa44339b80b899dd1b4ff0ce5e" Nov 25 15:56:17 crc kubenswrapper[4731]: I1125 15:56:17.020383 4731 scope.go:117] "RemoveContainer" containerID="822474fa84985c19afe3de131a6e0b0e979dd694de3ab871fca7641a86181a85" Nov 25 15:56:17 crc kubenswrapper[4731]: I1125 15:56:17.052677 4731 scope.go:117] "RemoveContainer" containerID="f16595105bcde2838d43a40ead398941bfb4cb3546eb530e6906028b4f484717" Nov 25 15:56:22 crc kubenswrapper[4731]: I1125 15:56:22.823431 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" Nov 25 15:56:22 crc kubenswrapper[4731]: E1125 15:56:22.824453 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:56:34 crc kubenswrapper[4731]: I1125 15:56:34.812513 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" Nov 25 15:56:34 crc kubenswrapper[4731]: E1125 15:56:34.813430 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.098695 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5pcwz"] Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.106250 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.123781 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pcwz"] Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.224148 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-utilities\") pod \"redhat-marketplace-5pcwz\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.224579 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-catalog-content\") pod \"redhat-marketplace-5pcwz\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.224740 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jx79\" (UniqueName: \"kubernetes.io/projected/481920b6-d9cd-4f94-94c6-610a80f72a9f-kube-api-access-7jx79\") pod \"redhat-marketplace-5pcwz\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.326901 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jx79\" (UniqueName: \"kubernetes.io/projected/481920b6-d9cd-4f94-94c6-610a80f72a9f-kube-api-access-7jx79\") pod \"redhat-marketplace-5pcwz\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.327268 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-utilities\") pod \"redhat-marketplace-5pcwz\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.327540 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-catalog-content\") pod \"redhat-marketplace-5pcwz\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.327884 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-utilities\") pod \"redhat-marketplace-5pcwz\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.328114 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-catalog-content\") pod \"redhat-marketplace-5pcwz\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.353365 4731 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-7jx79\" (UniqueName: \"kubernetes.io/projected/481920b6-d9cd-4f94-94c6-610a80f72a9f-kube-api-access-7jx79\") pod \"redhat-marketplace-5pcwz\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:40 crc kubenswrapper[4731]: I1125 15:56:40.429393 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:41 crc kubenswrapper[4731]: I1125 15:56:41.021938 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pcwz"] Nov 25 15:56:41 crc kubenswrapper[4731]: I1125 15:56:41.626552 4731 generic.go:334] "Generic (PLEG): container finished" podID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerID="1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4" exitCode=0 Nov 25 15:56:41 crc kubenswrapper[4731]: I1125 15:56:41.626632 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pcwz" event={"ID":"481920b6-d9cd-4f94-94c6-610a80f72a9f","Type":"ContainerDied","Data":"1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4"} Nov 25 15:56:41 crc kubenswrapper[4731]: I1125 15:56:41.626896 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pcwz" event={"ID":"481920b6-d9cd-4f94-94c6-610a80f72a9f","Type":"ContainerStarted","Data":"4dc3abe5b0f07e5ae1c6c116ecb78f1d4d120375cd7d7e8da3a1fc4603f61841"} Nov 25 15:56:43 crc kubenswrapper[4731]: I1125 15:56:43.650670 4731 generic.go:334] "Generic (PLEG): container finished" podID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerID="cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe" exitCode=0 Nov 25 15:56:43 crc kubenswrapper[4731]: I1125 15:56:43.651025 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pcwz" event={"ID":"481920b6-d9cd-4f94-94c6-610a80f72a9f","Type":"ContainerDied","Data":"cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe"} Nov 25 15:56:44 crc kubenswrapper[4731]: I1125 15:56:44.692431 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pcwz" event={"ID":"481920b6-d9cd-4f94-94c6-610a80f72a9f","Type":"ContainerStarted","Data":"57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628"} Nov 25 15:56:44 crc kubenswrapper[4731]: I1125 15:56:44.716288 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5pcwz" podStartSLOduration=2.240289484 podStartE2EDuration="4.716267116s" podCreationTimestamp="2025-11-25 15:56:40 +0000 UTC" firstStartedPulling="2025-11-25 15:56:41.628654332 +0000 UTC m=+3760.375788762" lastFinishedPulling="2025-11-25 15:56:44.104631964 +0000 UTC m=+3762.851766394" observedRunningTime="2025-11-25 15:56:44.710116292 +0000 UTC m=+3763.457250742" watchObservedRunningTime="2025-11-25 15:56:44.716267116 +0000 UTC m=+3763.463401546" Nov 25 15:56:48 crc kubenswrapper[4731]: I1125 15:56:48.811586 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" Nov 25 15:56:48 crc kubenswrapper[4731]: E1125 15:56:48.812172 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:56:50 crc kubenswrapper[4731]: I1125 15:56:50.430133 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:50 crc kubenswrapper[4731]: I1125 15:56:50.430618 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:50 crc kubenswrapper[4731]: I1125 15:56:50.504908 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:50 crc kubenswrapper[4731]: I1125 15:56:50.843564 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:50 crc kubenswrapper[4731]: I1125 15:56:50.891707 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pcwz"] Nov 25 15:56:52 crc kubenswrapper[4731]: I1125 15:56:52.764977 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5pcwz" podUID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerName="registry-server" containerID="cri-o://57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628" gracePeriod=2 Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.750881 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.816317 4731 generic.go:334] "Generic (PLEG): container finished" podID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerID="57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628" exitCode=0 Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.816374 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pcwz" event={"ID":"481920b6-d9cd-4f94-94c6-610a80f72a9f","Type":"ContainerDied","Data":"57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628"} Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.816385 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5pcwz" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.816404 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5pcwz" event={"ID":"481920b6-d9cd-4f94-94c6-610a80f72a9f","Type":"ContainerDied","Data":"4dc3abe5b0f07e5ae1c6c116ecb78f1d4d120375cd7d7e8da3a1fc4603f61841"} Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.816422 4731 scope.go:117] "RemoveContainer" containerID="57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.841329 4731 scope.go:117] "RemoveContainer" containerID="cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.867891 4731 scope.go:117] "RemoveContainer" containerID="1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.903176 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-utilities\") pod \"481920b6-d9cd-4f94-94c6-610a80f72a9f\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.903353 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jx79\" (UniqueName: \"kubernetes.io/projected/481920b6-d9cd-4f94-94c6-610a80f72a9f-kube-api-access-7jx79\") pod \"481920b6-d9cd-4f94-94c6-610a80f72a9f\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.903419 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-catalog-content\") pod \"481920b6-d9cd-4f94-94c6-610a80f72a9f\" (UID: \"481920b6-d9cd-4f94-94c6-610a80f72a9f\") " Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.906778 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-utilities" (OuterVolumeSpecName: "utilities") pod "481920b6-d9cd-4f94-94c6-610a80f72a9f" (UID: "481920b6-d9cd-4f94-94c6-610a80f72a9f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.908843 4731 scope.go:117] "RemoveContainer" containerID="57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628" Nov 25 15:56:53 crc kubenswrapper[4731]: E1125 15:56:53.911421 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628\": container with ID starting with 57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628 not found: ID does not exist" containerID="57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.911460 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628"} err="failed to get container status \"57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628\": rpc error: code = NotFound desc = could not find container \"57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628\": container with ID starting with 57f6f5f0e9c50443fc1623de8014bae6c2d83e00327312fb5d55adce81940628 not found: ID does not exist" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.911486 4731 scope.go:117] "RemoveContainer" containerID="cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe" Nov 25 15:56:53 crc kubenswrapper[4731]: E1125 15:56:53.911919 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe\": container with ID starting with cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe not found: ID does not exist" containerID="cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.911942 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe"} err="failed to get container status \"cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe\": rpc error: code = NotFound desc = could not find container \"cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe\": container with ID starting with cf47565edb6104c944dea77c8e49f91f1ca1a6c7895889a4b039e71ab16a1cbe not found: ID does not exist" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.911956 4731 scope.go:117] "RemoveContainer" containerID="1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4" Nov 25 15:56:53 crc kubenswrapper[4731]: E1125 15:56:53.912246 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4\": container with ID starting with 1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4 not found: ID does not exist" containerID="1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.912290 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4"} err="failed to get container status \"1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4\": rpc error: code = NotFound desc = could not 
find container \"1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4\": container with ID starting with 1e271c0c5852d2159ef709b3810c7aaeec555ac07cd4c7fc07cb27bb866089e4 not found: ID does not exist" Nov 25 15:56:53 crc kubenswrapper[4731]: I1125 15:56:53.913143 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/481920b6-d9cd-4f94-94c6-610a80f72a9f-kube-api-access-7jx79" (OuterVolumeSpecName: "kube-api-access-7jx79") pod "481920b6-d9cd-4f94-94c6-610a80f72a9f" (UID: "481920b6-d9cd-4f94-94c6-610a80f72a9f"). InnerVolumeSpecName "kube-api-access-7jx79". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:56:54 crc kubenswrapper[4731]: I1125 15:56:54.006502 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jx79\" (UniqueName: \"kubernetes.io/projected/481920b6-d9cd-4f94-94c6-610a80f72a9f-kube-api-access-7jx79\") on node \"crc\" DevicePath \"\"" Nov 25 15:56:54 crc kubenswrapper[4731]: I1125 15:56:54.006549 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 15:56:54 crc kubenswrapper[4731]: I1125 15:56:54.622047 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "481920b6-d9cd-4f94-94c6-610a80f72a9f" (UID: "481920b6-d9cd-4f94-94c6-610a80f72a9f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:56:54 crc kubenswrapper[4731]: I1125 15:56:54.722546 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/481920b6-d9cd-4f94-94c6-610a80f72a9f-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 15:56:54 crc kubenswrapper[4731]: I1125 15:56:54.756863 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pcwz"] Nov 25 15:56:54 crc kubenswrapper[4731]: I1125 15:56:54.768265 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5pcwz"] Nov 25 15:56:54 crc kubenswrapper[4731]: I1125 15:56:54.824005 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="481920b6-d9cd-4f94-94c6-610a80f72a9f" path="/var/lib/kubelet/pods/481920b6-d9cd-4f94-94c6-610a80f72a9f/volumes" Nov 25 15:56:59 crc kubenswrapper[4731]: I1125 15:56:59.811940 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" Nov 25 15:56:59 crc kubenswrapper[4731]: E1125 15:56:59.812861 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:57:14 crc kubenswrapper[4731]: I1125 15:57:14.811941 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" Nov 25 15:57:14 crc kubenswrapper[4731]: E1125 15:57:14.813609 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 15:57:18 crc kubenswrapper[4731]: I1125 15:57:18.143426 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_089e8703-f71e-4f81-91af-19224ba7b1b4/manager/0.log" Nov 25 15:57:22 crc kubenswrapper[4731]: I1125 15:57:22.824608 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Nov 25 15:57:22 crc kubenswrapper[4731]: I1125 15:57:22.825542 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="prometheus" containerID="cri-o://f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a" gracePeriod=600 Nov 25 15:57:22 crc kubenswrapper[4731]: I1125 15:57:22.825625 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="thanos-sidecar" containerID="cri-o://c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a" gracePeriod=600 Nov 25 15:57:22 crc kubenswrapper[4731]: I1125 15:57:22.825666 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="config-reloader" containerID="cri-o://e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9" gracePeriod=600 Nov 25 15:57:23 crc kubenswrapper[4731]: I1125 15:57:23.124326 4731 generic.go:334] "Generic (PLEG): container finished" podID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerID="c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a" exitCode=0 Nov 25 15:57:23 crc kubenswrapper[4731]: I1125 15:57:23.124371 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d943fa9-bb0e-4385-a412-8e982cd59e96","Type":"ContainerDied","Data":"c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a"} Nov 25 15:57:23 crc kubenswrapper[4731]: I1125 15:57:23.950016 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.046666 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-db\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.046721 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-rulefiles-0\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.046751 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.046954 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-secret-combined-ca-bundle\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.046980 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.047005 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-thanos-prometheus-http-client-file\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.047031 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-config-out\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.047079 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-tls-assets\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.047147 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tttqb\" (UniqueName: \"kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-kube-api-access-tttqb\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: 
\"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.047210 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.047257 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-config\") pod \"9d943fa9-bb0e-4385-a412-8e982cd59e96\" (UID: \"9d943fa9-bb0e-4385-a412-8e982cd59e96\") " Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.048479 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.049021 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-db" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "prometheus-metric-storage-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.053222 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.053516 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-config-out" (OuterVolumeSpecName: "config-out") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.054343 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.054680 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-config" (OuterVolumeSpecName: "config") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.055042 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.055255 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.057022 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.066067 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-kube-api-access-tttqb" (OuterVolumeSpecName: "kube-api-access-tttqb") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "kube-api-access-tttqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.138484 4731 generic.go:334] "Generic (PLEG): container finished" podID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerID="e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9" exitCode=0 Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.138974 4731 generic.go:334] "Generic (PLEG): container finished" podID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerID="f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a" exitCode=0 Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.138570 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d943fa9-bb0e-4385-a412-8e982cd59e96","Type":"ContainerDied","Data":"e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9"} Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.138573 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.139197 4731 scope.go:117] "RemoveContainer" containerID="c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.139122 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d943fa9-bb0e-4385-a412-8e982cd59e96","Type":"ContainerDied","Data":"f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a"} Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.139290 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9d943fa9-bb0e-4385-a412-8e982cd59e96","Type":"ContainerDied","Data":"8c606fcdd89e6f84e02053adc10c16ad4c3b391466fe0d0b627d20c4c6c47d62"} Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.151667 4731 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.151707 4731 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.151727 4731 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.151742 4731 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-config-out\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.151755 4731 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-tls-assets\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.151770 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tttqb\" (UniqueName: \"kubernetes.io/projected/9d943fa9-bb0e-4385-a412-8e982cd59e96-kube-api-access-tttqb\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.151790 4731 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.151799 4731 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-db\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.151808 4731 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9d943fa9-bb0e-4385-a412-8e982cd59e96-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.151861 4731 
reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.201120 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config" (OuterVolumeSpecName: "web-config") pod "9d943fa9-bb0e-4385-a412-8e982cd59e96" (UID: "9d943fa9-bb0e-4385-a412-8e982cd59e96"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.254517 4731 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9d943fa9-bb0e-4385-a412-8e982cd59e96-web-config\") on node \"crc\" DevicePath \"\"" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.260957 4731 scope.go:117] "RemoveContainer" containerID="e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.287135 4731 scope.go:117] "RemoveContainer" containerID="f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.315158 4731 scope.go:117] "RemoveContainer" containerID="eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.348942 4731 scope.go:117] "RemoveContainer" containerID="c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a" Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.349422 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a\": container with ID starting with c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a not found: ID does not exist" containerID="c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.349459 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a"} err="failed to get container status \"c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a\": rpc error: code = NotFound desc = could not find container \"c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a\": container with ID starting with c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a not found: ID does not exist" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.349486 4731 scope.go:117] "RemoveContainer" containerID="e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9" Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.349714 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9\": container with ID starting with e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9 not found: ID does not exist" containerID="e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9" Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.349742 4731 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9"} err="failed to get container status \"e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9\": rpc error: code = NotFound desc = could not find container \"e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9\": container with ID starting with e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9 not found: ID does not exist"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.349762 4731 scope.go:117] "RemoveContainer" containerID="f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a"
Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.350016 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a\": container with ID starting with f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a not found: ID does not exist" containerID="f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.350039 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a"} err="failed to get container status \"f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a\": rpc error: code = NotFound desc = could not find container \"f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a\": container with ID starting with f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a not found: ID does not exist"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.350056 4731 scope.go:117] "RemoveContainer" containerID="eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4"
Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.356419 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4\": container with ID starting with eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4 not found: ID does not exist" containerID="eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.356478 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4"} err="failed to get container status \"eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4\": rpc error: code = NotFound desc = could not find container \"eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4\": container with ID starting with eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4 not found: ID does not exist"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.356508 4731 scope.go:117] "RemoveContainer" containerID="c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.356866 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a"} err="failed to get container status \"c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a\": rpc error: code = NotFound desc = could not find container \"c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a\": container with ID starting with c3f602fd982e9606167286f8ed9fac7db91d9ff0cc22091b0ec13ac416c57c6a not found: ID does not exist"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.356909 4731 scope.go:117] "RemoveContainer" containerID="e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.357207 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9"} err="failed to get container status \"e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9\": rpc error: code = NotFound desc = could not find container \"e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9\": container with ID starting with e372711f0ee1e4b7d1698c9d63d8edfe0369788e3a9365f17b7e15ae3123fdf9 not found: ID does not exist"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.357230 4731 scope.go:117] "RemoveContainer" containerID="f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.357468 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a"} err="failed to get container status \"f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a\": rpc error: code = NotFound desc = could not find container \"f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a\": container with ID starting with f81b642857d371b970900868ea4f3596a60200f3b4b6e40befa1d34c6d12a78a not found: ID does not exist"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.357490 4731 scope.go:117] "RemoveContainer" containerID="eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.357954 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4"} err="failed to get container status \"eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4\": rpc error: code = NotFound desc = could not find container \"eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4\": container with ID starting with eb1dc1d81401fe76f75ff11c3ec4f595e6df0070448f5a86c61cb4d9e3b7d7e4 not found: ID does not exist"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.478170 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.490390 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.507706 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.508086 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="thanos-sidecar"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508102 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="thanos-sidecar"
Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.508115 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="config-reloader"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508121 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="config-reloader"
Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.508129 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerName="registry-server"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508135 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerName="registry-server"
Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.508147 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="init-config-reloader"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508152 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="init-config-reloader"
Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.508174 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerName="extract-utilities"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508179 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerName="extract-utilities"
Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.508190 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="prometheus"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508195 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="prometheus"
Nov 25 15:57:24 crc kubenswrapper[4731]: E1125 15:57:24.508206 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerName="extract-content"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508211 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerName="extract-content"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508385 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="481920b6-d9cd-4f94-94c6-610a80f72a9f" containerName="registry-server"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508396 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="config-reloader"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508418 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="prometheus"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.508426 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="thanos-sidecar"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.511098 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.553711 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.553854 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.554439 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-sf6fp"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.554501 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.554704 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.554974 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.557593 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.576648 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.664190 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/053ea4bb-51e8-4900-b8ad-fae14168a43f-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.664479 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.664624 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.664776 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlpvr\" (UniqueName: \"kubernetes.io/projected/053ea4bb-51e8-4900-b8ad-fae14168a43f-kube-api-access-mlpvr\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.664904 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.665106 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/053ea4bb-51e8-4900-b8ad-fae14168a43f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.665332 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/053ea4bb-51e8-4900-b8ad-fae14168a43f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.665422 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.665546 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/053ea4bb-51e8-4900-b8ad-fae14168a43f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.665657 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.665710 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-config\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.767697 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/053ea4bb-51e8-4900-b8ad-fae14168a43f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.767749 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/053ea4bb-51e8-4900-b8ad-fae14168a43f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.767774 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.767834 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/053ea4bb-51e8-4900-b8ad-fae14168a43f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.767878 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.767896 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-config\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.767987 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/053ea4bb-51e8-4900-b8ad-fae14168a43f-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.768008 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.768045 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.768073 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlpvr\" (UniqueName: \"kubernetes.io/projected/053ea4bb-51e8-4900-b8ad-fae14168a43f-kube-api-access-mlpvr\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.768111 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.769198 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/053ea4bb-51e8-4900-b8ad-fae14168a43f-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.770219 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/053ea4bb-51e8-4900-b8ad-fae14168a43f-prometheus-metric-storage-db\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.774327 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/053ea4bb-51e8-4900-b8ad-fae14168a43f-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.774983 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.775012 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.774991 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.775370 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.776651 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/053ea4bb-51e8-4900-b8ad-fae14168a43f-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.777208 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-config\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.778676 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/053ea4bb-51e8-4900-b8ad-fae14168a43f-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.793664 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlpvr\" (UniqueName: \"kubernetes.io/projected/053ea4bb-51e8-4900-b8ad-fae14168a43f-kube-api-access-mlpvr\") pod \"prometheus-metric-storage-0\" (UID: \"053ea4bb-51e8-4900-b8ad-fae14168a43f\") " pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.822652 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" path="/var/lib/kubelet/pods/9d943fa9-bb0e-4385-a412-8e982cd59e96/volumes"
Nov 25 15:57:24 crc kubenswrapper[4731]: I1125 15:57:24.866588 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:25 crc kubenswrapper[4731]: I1125 15:57:25.837803 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Nov 25 15:57:26 crc kubenswrapper[4731]: I1125 15:57:26.164232 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"053ea4bb-51e8-4900-b8ad-fae14168a43f","Type":"ContainerStarted","Data":"228b1130f7fc0c3bab69b1958dd2ef969d442c306dacf41521dc6a0e52087d35"}
Nov 25 15:57:26 crc kubenswrapper[4731]: I1125 15:57:26.812437 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:57:26 crc kubenswrapper[4731]: E1125 15:57:26.812804 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:57:26 crc kubenswrapper[4731]: I1125 15:57:26.816511 4731 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="9d943fa9-bb0e-4385-a412-8e982cd59e96" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.1.13:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Nov 25 15:57:30 crc kubenswrapper[4731]: I1125 15:57:30.218047 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"053ea4bb-51e8-4900-b8ad-fae14168a43f","Type":"ContainerStarted","Data":"09d74d02ecd78c4877d37cea588674e376f4f5c3aa02e135d03f1537b66c5e96"}
Nov 25 15:57:38 crc kubenswrapper[4731]: I1125 15:57:38.296370 4731 generic.go:334] "Generic (PLEG): container finished" podID="053ea4bb-51e8-4900-b8ad-fae14168a43f" containerID="09d74d02ecd78c4877d37cea588674e376f4f5c3aa02e135d03f1537b66c5e96" exitCode=0
Nov 25 15:57:38 crc kubenswrapper[4731]: I1125 15:57:38.296464 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"053ea4bb-51e8-4900-b8ad-fae14168a43f","Type":"ContainerDied","Data":"09d74d02ecd78c4877d37cea588674e376f4f5c3aa02e135d03f1537b66c5e96"}
Nov 25 15:57:39 crc kubenswrapper[4731]: I1125 15:57:39.312170 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"053ea4bb-51e8-4900-b8ad-fae14168a43f","Type":"ContainerStarted","Data":"340f362ffdffd71d9ad3413231fe6f2236f08ca9a122fbdfd9a96e78c37850ee"}
Nov 25 15:57:40 crc kubenswrapper[4731]: I1125 15:57:40.811142 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:57:40 crc kubenswrapper[4731]: E1125 15:57:40.811677 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:57:44 crc kubenswrapper[4731]: I1125 15:57:44.360136 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"053ea4bb-51e8-4900-b8ad-fae14168a43f","Type":"ContainerStarted","Data":"d57a23b942772f87d40645cea4aef1fd500655cf1e4f992235fed4909a36d93f"}
Nov 25 15:57:45 crc kubenswrapper[4731]: I1125 15:57:45.374914 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"053ea4bb-51e8-4900-b8ad-fae14168a43f","Type":"ContainerStarted","Data":"414ea594fe75d67c866eef737a66c6a742c44a0330dbac07b6f6050d8715240b"}
Nov 25 15:57:45 crc kubenswrapper[4731]: I1125 15:57:45.405053 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=21.405037082 podStartE2EDuration="21.405037082s" podCreationTimestamp="2025-11-25 15:57:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 15:57:45.404493679 +0000 UTC m=+3824.151628109" watchObservedRunningTime="2025-11-25 15:57:45.405037082 +0000 UTC m=+3824.152171512"
Nov 25 15:57:49 crc kubenswrapper[4731]: I1125 15:57:49.867726 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:52 crc kubenswrapper[4731]: I1125 15:57:52.819832 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:57:52 crc kubenswrapper[4731]: E1125 15:57:52.820395 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:57:54 crc kubenswrapper[4731]: I1125 15:57:54.867714 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:54 crc kubenswrapper[4731]: I1125 15:57:54.874195 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:57:55 crc kubenswrapper[4731]: I1125 15:57:55.474700 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Nov 25 15:58:03 crc kubenswrapper[4731]: I1125 15:58:03.811556 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:58:03 crc kubenswrapper[4731]: E1125 15:58:03.812406 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:58:16 crc kubenswrapper[4731]: I1125 15:58:16.812508 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:58:16 crc kubenswrapper[4731]: E1125 15:58:16.814405 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:58:27 crc kubenswrapper[4731]: I1125 15:58:27.811840 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:58:27 crc kubenswrapper[4731]: E1125 15:58:27.812974 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:58:41 crc kubenswrapper[4731]: I1125 15:58:41.812046 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:58:41 crc kubenswrapper[4731]: E1125 15:58:41.812740 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:58:52 crc kubenswrapper[4731]: I1125 15:58:52.823434 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:58:52 crc kubenswrapper[4731]: E1125 15:58:52.827867 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:59:03 crc kubenswrapper[4731]: I1125 15:59:03.812014 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:59:03 crc kubenswrapper[4731]: E1125 15:59:03.812812 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:59:17 crc kubenswrapper[4731]: I1125 15:59:17.811364 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:59:17 crc kubenswrapper[4731]: E1125 15:59:17.812025 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:59:22 crc kubenswrapper[4731]: I1125 15:59:22.894423 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_089e8703-f71e-4f81-91af-19224ba7b1b4/manager/0.log"
Nov 25 15:59:31 crc kubenswrapper[4731]: I1125 15:59:31.811655 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:59:31 crc kubenswrapper[4731]: E1125 15:59:31.812407 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:59:43 crc kubenswrapper[4731]: I1125 15:59:43.811779 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:59:43 crc kubenswrapper[4731]: E1125 15:59:43.812516 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.244842 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4dfrk/must-gather-98zk6"]
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.254643 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4dfrk/must-gather-98zk6"
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.257132 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4dfrk"/"kube-root-ca.crt"
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.258685 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-4dfrk"/"openshift-service-ca.crt"
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.394574 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4dfrk/must-gather-98zk6"]
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.397053 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5hzt\" (UniqueName: \"kubernetes.io/projected/53260cd4-5c2d-4f41-a325-3d1fff95df83-kube-api-access-w5hzt\") pod \"must-gather-98zk6\" (UID: \"53260cd4-5c2d-4f41-a325-3d1fff95df83\") " pod="openshift-must-gather-4dfrk/must-gather-98zk6"
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.409514 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/53260cd4-5c2d-4f41-a325-3d1fff95df83-must-gather-output\") pod \"must-gather-98zk6\" (UID: \"53260cd4-5c2d-4f41-a325-3d1fff95df83\") " pod="openshift-must-gather-4dfrk/must-gather-98zk6"
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.511410 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5hzt\" (UniqueName: \"kubernetes.io/projected/53260cd4-5c2d-4f41-a325-3d1fff95df83-kube-api-access-w5hzt\") pod \"must-gather-98zk6\" (UID: \"53260cd4-5c2d-4f41-a325-3d1fff95df83\") " pod="openshift-must-gather-4dfrk/must-gather-98zk6"
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.511934 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/53260cd4-5c2d-4f41-a325-3d1fff95df83-must-gather-output\") pod \"must-gather-98zk6\" (UID: \"53260cd4-5c2d-4f41-a325-3d1fff95df83\") " pod="openshift-must-gather-4dfrk/must-gather-98zk6"
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.512387 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/53260cd4-5c2d-4f41-a325-3d1fff95df83-must-gather-output\") pod \"must-gather-98zk6\" (UID: \"53260cd4-5c2d-4f41-a325-3d1fff95df83\") " pod="openshift-must-gather-4dfrk/must-gather-98zk6"
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.572165 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5hzt\" (UniqueName: \"kubernetes.io/projected/53260cd4-5c2d-4f41-a325-3d1fff95df83-kube-api-access-w5hzt\") pod \"must-gather-98zk6\" (UID: \"53260cd4-5c2d-4f41-a325-3d1fff95df83\") " pod="openshift-must-gather-4dfrk/must-gather-98zk6"
Nov 25 15:59:45 crc kubenswrapper[4731]: I1125 15:59:45.583441 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4dfrk/must-gather-98zk6"
Nov 25 15:59:46 crc kubenswrapper[4731]: I1125 15:59:46.111356 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-4dfrk/must-gather-98zk6"]
Nov 25 15:59:46 crc kubenswrapper[4731]: I1125 15:59:46.595610 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4dfrk/must-gather-98zk6" event={"ID":"53260cd4-5c2d-4f41-a325-3d1fff95df83","Type":"ContainerStarted","Data":"03b4a4580344be0abc0de5db3d140c70af8e1dc716991657fd6a243572dba5a6"}
Nov 25 15:59:50 crc kubenswrapper[4731]: I1125 15:59:50.634994 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4dfrk/must-gather-98zk6" event={"ID":"53260cd4-5c2d-4f41-a325-3d1fff95df83","Type":"ContainerStarted","Data":"73b0533e32d42e77a5647e0e0a7c5735221bb997b08abb9f9b7ccef4e1c6e67e"}
Nov 25 15:59:51 crc kubenswrapper[4731]: I1125 15:59:51.646871 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4dfrk/must-gather-98zk6" event={"ID":"53260cd4-5c2d-4f41-a325-3d1fff95df83","Type":"ContainerStarted","Data":"c4e81f8092148cef51eaaca27a24db0bf2fb7a9c32b7150eb1c57a79387b5bd4"}
Nov 25 15:59:51 crc kubenswrapper[4731]: I1125 15:59:51.673775 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4dfrk/must-gather-98zk6" podStartSLOduration=2.584037488 podStartE2EDuration="6.673752019s" podCreationTimestamp="2025-11-25 15:59:45 +0000 UTC" firstStartedPulling="2025-11-25 15:59:46.127686751 +0000 UTC m=+3944.874821181" lastFinishedPulling="2025-11-25 15:59:50.217401282 +0000 UTC m=+3948.964535712" observedRunningTime="2025-11-25 15:59:51.665746219 +0000 UTC m=+3950.412880659" watchObservedRunningTime="2025-11-25 15:59:51.673752019 +0000 UTC m=+3950.420886449"
Nov 25 15:59:56 crc kubenswrapper[4731]: I1125 15:59:56.811560 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 15:59:56 crc kubenswrapper[4731]: E1125 15:59:56.812432 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 15:59:57 crc kubenswrapper[4731]: I1125 15:59:57.590934 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4dfrk/crc-debug-7vtt2"]
Nov 25 15:59:57 crc kubenswrapper[4731]: I1125 15:59:57.592647 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4dfrk/crc-debug-7vtt2"
Nov 25 15:59:57 crc kubenswrapper[4731]: I1125 15:59:57.594395 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4dfrk"/"default-dockercfg-59kj9"
Nov 25 15:59:57 crc kubenswrapper[4731]: I1125 15:59:57.648332 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29p94\" (UniqueName: \"kubernetes.io/projected/58284bfe-441b-4667-be20-60867dd3a815-kube-api-access-29p94\") pod \"crc-debug-7vtt2\" (UID: \"58284bfe-441b-4667-be20-60867dd3a815\") " pod="openshift-must-gather-4dfrk/crc-debug-7vtt2"
Nov 25 15:59:57 crc kubenswrapper[4731]: I1125 15:59:57.648762 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58284bfe-441b-4667-be20-60867dd3a815-host\") pod \"crc-debug-7vtt2\" (UID: \"58284bfe-441b-4667-be20-60867dd3a815\") " pod="openshift-must-gather-4dfrk/crc-debug-7vtt2"
Nov 25 15:59:57 crc kubenswrapper[4731]: I1125 15:59:57.750783 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58284bfe-441b-4667-be20-60867dd3a815-host\") pod \"crc-debug-7vtt2\" (UID: \"58284bfe-441b-4667-be20-60867dd3a815\") " pod="openshift-must-gather-4dfrk/crc-debug-7vtt2"
Nov 25 15:59:57 crc kubenswrapper[4731]: I1125 15:59:57.750879 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58284bfe-441b-4667-be20-60867dd3a815-host\") pod \"crc-debug-7vtt2\" (UID: \"58284bfe-441b-4667-be20-60867dd3a815\") " pod="openshift-must-gather-4dfrk/crc-debug-7vtt2"
Nov 25 15:59:57 crc kubenswrapper[4731]: I1125 15:59:57.751028 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29p94\" (UniqueName: \"kubernetes.io/projected/58284bfe-441b-4667-be20-60867dd3a815-kube-api-access-29p94\") pod \"crc-debug-7vtt2\" (UID: \"58284bfe-441b-4667-be20-60867dd3a815\") " pod="openshift-must-gather-4dfrk/crc-debug-7vtt2"
Nov 25 15:59:57 crc kubenswrapper[4731]: I1125 15:59:57.776499 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29p94\" (UniqueName: \"kubernetes.io/projected/58284bfe-441b-4667-be20-60867dd3a815-kube-api-access-29p94\") pod \"crc-debug-7vtt2\" (UID: \"58284bfe-441b-4667-be20-60867dd3a815\") " pod="openshift-must-gather-4dfrk/crc-debug-7vtt2"
Nov 25 15:59:57 crc kubenswrapper[4731]: I1125 15:59:57.918733 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4dfrk/crc-debug-7vtt2"
Nov 25 15:59:58 crc kubenswrapper[4731]: I1125 15:59:58.709577 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4dfrk/crc-debug-7vtt2" event={"ID":"58284bfe-441b-4667-be20-60867dd3a815","Type":"ContainerStarted","Data":"a1b82ea1aa83775b886ba83c77376f2df0cec8b4f8c37c8031d9b73a8607c080"}
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.167247 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"]
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.169282 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.171482 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.171756 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.178706 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"]
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.312435 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b3d23b1-09f5-4957-872c-3599246fbc18-secret-volume\") pod \"collect-profiles-29401440-s6vq5\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.312530 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b3d23b1-09f5-4957-872c-3599246fbc18-config-volume\") pod \"collect-profiles-29401440-s6vq5\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.312587 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wkf9\" (UniqueName: \"kubernetes.io/projected/5b3d23b1-09f5-4957-872c-3599246fbc18-kube-api-access-4wkf9\") pod \"collect-profiles-29401440-s6vq5\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.414205 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b3d23b1-09f5-4957-872c-3599246fbc18-secret-volume\") pod \"collect-profiles-29401440-s6vq5\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.414337 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b3d23b1-09f5-4957-872c-3599246fbc18-config-volume\") pod \"collect-profiles-29401440-s6vq5\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.414430 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4wkf9\" (UniqueName: \"kubernetes.io/projected/5b3d23b1-09f5-4957-872c-3599246fbc18-kube-api-access-4wkf9\") pod \"collect-profiles-29401440-s6vq5\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.415658 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b3d23b1-09f5-4957-872c-3599246fbc18-config-volume\") pod \"collect-profiles-29401440-s6vq5\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.556235 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b3d23b1-09f5-4957-872c-3599246fbc18-secret-volume\") pod \"collect-profiles-29401440-s6vq5\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.557120 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4wkf9\" (UniqueName: \"kubernetes.io/projected/5b3d23b1-09f5-4957-872c-3599246fbc18-kube-api-access-4wkf9\") pod \"collect-profiles-29401440-s6vq5\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:00 crc kubenswrapper[4731]: I1125 16:00:00.815877 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:01 crc kubenswrapper[4731]: I1125 16:00:01.362131 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"]
Nov 25 16:00:01 crc kubenswrapper[4731]: W1125 16:00:01.378160 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b3d23b1_09f5_4957_872c_3599246fbc18.slice/crio-ca32a40aff15889d4419f71373e7716eb442234beea15de5b9ba37522864940e WatchSource:0}: Error finding container ca32a40aff15889d4419f71373e7716eb442234beea15de5b9ba37522864940e: Status 404 returned error can't find the container with id ca32a40aff15889d4419f71373e7716eb442234beea15de5b9ba37522864940e
Nov 25 16:00:01 crc kubenswrapper[4731]: I1125 16:00:01.746980 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5" event={"ID":"5b3d23b1-09f5-4957-872c-3599246fbc18","Type":"ContainerStarted","Data":"2524b1f7e95596bfd1370bf664b85a2bbd227adf72282b237623a38ced8fd1c4"}
Nov 25 16:00:01 crc kubenswrapper[4731]: I1125 16:00:01.747336 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5" event={"ID":"5b3d23b1-09f5-4957-872c-3599246fbc18","Type":"ContainerStarted","Data":"ca32a40aff15889d4419f71373e7716eb442234beea15de5b9ba37522864940e"}
Nov 25 16:00:01 crc kubenswrapper[4731]: I1125 16:00:01.767952 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5" podStartSLOduration=1.7679359890000002 podStartE2EDuration="1.767935989s" podCreationTimestamp="2025-11-25 16:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 16:00:01.763900188 +0000 UTC m=+3960.511034628" watchObservedRunningTime="2025-11-25 16:00:01.767935989 +0000 UTC m=+3960.515070419"
Nov 25 16:00:02 crc kubenswrapper[4731]: I1125 16:00:02.758493 4731 generic.go:334] "Generic (PLEG): container finished" podID="5b3d23b1-09f5-4957-872c-3599246fbc18" containerID="2524b1f7e95596bfd1370bf664b85a2bbd227adf72282b237623a38ced8fd1c4" exitCode=0
Nov 25 16:00:02 crc kubenswrapper[4731]: I1125 16:00:02.758598 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5" event={"ID":"5b3d23b1-09f5-4957-872c-3599246fbc18","Type":"ContainerDied","Data":"2524b1f7e95596bfd1370bf664b85a2bbd227adf72282b237623a38ced8fd1c4"}
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.272757 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.418606 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b3d23b1-09f5-4957-872c-3599246fbc18-config-volume\") pod \"5b3d23b1-09f5-4957-872c-3599246fbc18\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") "
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.418703 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4wkf9\" (UniqueName: \"kubernetes.io/projected/5b3d23b1-09f5-4957-872c-3599246fbc18-kube-api-access-4wkf9\") pod \"5b3d23b1-09f5-4957-872c-3599246fbc18\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") "
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.419441 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b3d23b1-09f5-4957-872c-3599246fbc18-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b3d23b1-09f5-4957-872c-3599246fbc18" (UID: "5b3d23b1-09f5-4957-872c-3599246fbc18"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.418745 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b3d23b1-09f5-4957-872c-3599246fbc18-secret-volume\") pod \"5b3d23b1-09f5-4957-872c-3599246fbc18\" (UID: \"5b3d23b1-09f5-4957-872c-3599246fbc18\") "
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.420591 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b3d23b1-09f5-4957-872c-3599246fbc18-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.427314 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn"]
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.429426 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b3d23b1-09f5-4957-872c-3599246fbc18-kube-api-access-4wkf9" (OuterVolumeSpecName: "kube-api-access-4wkf9") pod "5b3d23b1-09f5-4957-872c-3599246fbc18" (UID: "5b3d23b1-09f5-4957-872c-3599246fbc18"). InnerVolumeSpecName "kube-api-access-4wkf9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.429433 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b3d23b1-09f5-4957-872c-3599246fbc18-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b3d23b1-09f5-4957-872c-3599246fbc18" (UID: "5b3d23b1-09f5-4957-872c-3599246fbc18"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.438965 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401395-lwqjn"]
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.522803 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4wkf9\" (UniqueName: \"kubernetes.io/projected/5b3d23b1-09f5-4957-872c-3599246fbc18-kube-api-access-4wkf9\") on node \"crc\" DevicePath \"\""
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.522866 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b3d23b1-09f5-4957-872c-3599246fbc18-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.785842 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5" event={"ID":"5b3d23b1-09f5-4957-872c-3599246fbc18","Type":"ContainerDied","Data":"ca32a40aff15889d4419f71373e7716eb442234beea15de5b9ba37522864940e"}
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.785897 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca32a40aff15889d4419f71373e7716eb442234beea15de5b9ba37522864940e"
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.785969 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401440-s6vq5"
Nov 25 16:00:04 crc kubenswrapper[4731]: I1125 16:00:04.828766 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa" path="/var/lib/kubelet/pods/e0a7e2d6-c2ce-40f4-9b52-66ecdd265caa/volumes"
Nov 25 16:00:08 crc kubenswrapper[4731]: I1125 16:00:08.812012 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 16:00:08 crc kubenswrapper[4731]: E1125 16:00:08.812867 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:00:12 crc kubenswrapper[4731]: I1125 16:00:12.888626 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4dfrk/crc-debug-7vtt2" event={"ID":"58284bfe-441b-4667-be20-60867dd3a815","Type":"ContainerStarted","Data":"f4b01ce6effcff0ed6a2674bac92c7b53ade12b59118951ae844e56ffc63d095"}
Nov 25 16:00:12 crc kubenswrapper[4731]: I1125 16:00:12.904291 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-4dfrk/crc-debug-7vtt2" podStartSLOduration=2.192098021 podStartE2EDuration="15.904268168s" podCreationTimestamp="2025-11-25 15:59:57 +0000 UTC" firstStartedPulling="2025-11-25 15:59:57.959403353 +0000 UTC m=+3956.706537793" lastFinishedPulling="2025-11-25 16:00:11.67157351 +0000 UTC m=+3970.418707940" observedRunningTime="2025-11-25 16:00:12.900072814 +0000 UTC m=+3971.647207244" watchObservedRunningTime="2025-11-25 16:00:12.904268168 +0000 UTC m=+3971.651402598"
Nov 25 16:00:17 crc kubenswrapper[4731]: I1125 16:00:17.255745 4731 scope.go:117] "RemoveContainer" containerID="0298863dd0030f2e6811edb297c770945b27c5d33571dc8608f54cd294a0134f"
Nov 25 16:00:20 crc kubenswrapper[4731]: I1125 16:00:20.812187 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00"
Nov 25 16:00:24 crc kubenswrapper[4731]: I1125 16:00:24.863910 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rb6lz"]
Nov 25 16:00:24 crc kubenswrapper[4731]: E1125 16:00:24.865619 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b3d23b1-09f5-4957-872c-3599246fbc18" containerName="collect-profiles"
Nov 25 16:00:24 crc kubenswrapper[4731]: I1125 16:00:24.865711 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b3d23b1-09f5-4957-872c-3599246fbc18" containerName="collect-profiles"
Nov 25 16:00:24 crc kubenswrapper[4731]: I1125 16:00:24.866267 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b3d23b1-09f5-4957-872c-3599246fbc18" containerName="collect-profiles"
Nov 25 16:00:24 crc kubenswrapper[4731]: I1125 16:00:24.877461 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:24 crc kubenswrapper[4731]: I1125 16:00:24.880065 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rb6lz"]
Nov 25 16:00:24 crc kubenswrapper[4731]: I1125 16:00:24.966355 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-utilities\") pod \"redhat-operators-rb6lz\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:24 crc kubenswrapper[4731]: I1125 16:00:24.966596 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnf4n\" (UniqueName: \"kubernetes.io/projected/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-kube-api-access-bnf4n\") pod \"redhat-operators-rb6lz\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:24 crc kubenswrapper[4731]: I1125 16:00:24.967119 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-catalog-content\") pod \"redhat-operators-rb6lz\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:25 crc kubenswrapper[4731]: I1125 16:00:25.069144 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnf4n\" (UniqueName: \"kubernetes.io/projected/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-kube-api-access-bnf4n\") pod \"redhat-operators-rb6lz\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:25 crc kubenswrapper[4731]: I1125 16:00:25.069333 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-catalog-content\") pod \"redhat-operators-rb6lz\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:25 crc kubenswrapper[4731]: I1125 16:00:25.069390 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-utilities\") pod \"redhat-operators-rb6lz\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:25 crc kubenswrapper[4731]: I1125 16:00:25.069980 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-utilities\") pod \"redhat-operators-rb6lz\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:25 crc kubenswrapper[4731]: I1125 16:00:25.070609 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-catalog-content\") pod \"redhat-operators-rb6lz\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:25 crc kubenswrapper[4731]: I1125 16:00:25.099745 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnf4n\" (UniqueName: \"kubernetes.io/projected/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-kube-api-access-bnf4n\") pod \"redhat-operators-rb6lz\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:25 crc kubenswrapper[4731]: I1125 16:00:25.212135 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rb6lz"
Nov 25 16:00:29 crc kubenswrapper[4731]: I1125 16:00:29.667808 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rb6lz"]
Nov 25 16:00:30 crc kubenswrapper[4731]: I1125 16:00:30.041676 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"483fad0053bc2b3bd7b0ec82c454f39b2e2e33b64ce6472c9ab41d7004caede0"}
Nov 25 16:00:30 crc kubenswrapper[4731]: I1125 16:00:30.044243 4731 generic.go:334] "Generic (PLEG): container finished" podID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerID="2f269e55b94805405953940099b564a89707f64f334ef9c45a020b397ddedc93" exitCode=0
Nov 25 16:00:30 crc kubenswrapper[4731]: I1125 16:00:30.044283 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb6lz" event={"ID":"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b","Type":"ContainerDied","Data":"2f269e55b94805405953940099b564a89707f64f334ef9c45a020b397ddedc93"}
Nov 25 16:00:30 crc kubenswrapper[4731]: I1125 16:00:30.044332 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb6lz" event={"ID":"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b","Type":"ContainerStarted","Data":"5d4cb2e059c20a667fb54a5e0f775b3697b483c156be44a9fd9cee4a861033a7"}
Nov 25 16:00:30 crc kubenswrapper[4731]: I1125 16:00:30.047715 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Nov 25 16:00:32 crc kubenswrapper[4731]: I1125 16:00:32.065362 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb6lz" event={"ID":"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b","Type":"ContainerStarted","Data":"c557dd1e9ea27e3b571e61b4532b005fee4947b3a6a6e091150ca904a5f07334"}
Nov 25 16:00:43 crc kubenswrapper[4731]: I1125 16:00:43.201384 4731 generic.go:334] "Generic (PLEG): container finished" podID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerID="c557dd1e9ea27e3b571e61b4532b005fee4947b3a6a6e091150ca904a5f07334" exitCode=0
Nov 25 16:00:43 crc kubenswrapper[4731]: I1125 16:00:43.201478 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb6lz" event={"ID":"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b","Type":"ContainerDied","Data":"c557dd1e9ea27e3b571e61b4532b005fee4947b3a6a6e091150ca904a5f07334"}
Nov 25 16:00:46 crc kubenswrapper[4731]: I1125 16:00:46.238664 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb6lz" event={"ID":"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b","Type":"ContainerStarted","Data":"e7261629d93e63840501c165e61693e740a9d597fc644323362ef44ce4ddfd08"}
Nov 25 16:00:46 crc kubenswrapper[4731]: I1125 16:00:46.240480 4731 generic.go:334] "Generic (PLEG): container finished" podID="58284bfe-441b-4667-be20-60867dd3a815" containerID="f4b01ce6effcff0ed6a2674bac92c7b53ade12b59118951ae844e56ffc63d095" exitCode=0
Nov 25 16:00:46 crc kubenswrapper[4731]: I1125 16:00:46.240517 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4dfrk/crc-debug-7vtt2" event={"ID":"58284bfe-441b-4667-be20-60867dd3a815","Type":"ContainerDied","Data":"f4b01ce6effcff0ed6a2674bac92c7b53ade12b59118951ae844e56ffc63d095"}
Nov 25 16:00:46 crc kubenswrapper[4731]: I1125 16:00:46.258543 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rb6lz" podStartSLOduration=8.152967533 podStartE2EDuration="22.258528475s" podCreationTimestamp="2025-11-25 16:00:24 +0000 UTC" firstStartedPulling="2025-11-25 16:00:30.047483609 +0000 UTC m=+3988.794618039" lastFinishedPulling="2025-11-25 16:00:44.153044551 +0000 UTC m=+4002.900178981" observedRunningTime="2025-11-25 16:00:46.255488539 +0000 UTC m=+4005.002622979" watchObservedRunningTime="2025-11-25 16:00:46.258528475 +0000 UTC m=+4005.005662905"
Nov 25 16:00:47 crc kubenswrapper[4731]: I1125 16:00:47.376360 4731 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-must-gather-4dfrk/crc-debug-7vtt2" Nov 25 16:00:47 crc kubenswrapper[4731]: I1125 16:00:47.417796 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4dfrk/crc-debug-7vtt2"] Nov 25 16:00:47 crc kubenswrapper[4731]: I1125 16:00:47.427328 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4dfrk/crc-debug-7vtt2"] Nov 25 16:00:47 crc kubenswrapper[4731]: I1125 16:00:47.456125 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29p94\" (UniqueName: \"kubernetes.io/projected/58284bfe-441b-4667-be20-60867dd3a815-kube-api-access-29p94\") pod \"58284bfe-441b-4667-be20-60867dd3a815\" (UID: \"58284bfe-441b-4667-be20-60867dd3a815\") " Nov 25 16:00:47 crc kubenswrapper[4731]: I1125 16:00:47.456389 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58284bfe-441b-4667-be20-60867dd3a815-host\") pod \"58284bfe-441b-4667-be20-60867dd3a815\" (UID: \"58284bfe-441b-4667-be20-60867dd3a815\") " Nov 25 16:00:47 crc kubenswrapper[4731]: I1125 16:00:47.456507 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/58284bfe-441b-4667-be20-60867dd3a815-host" (OuterVolumeSpecName: "host") pod "58284bfe-441b-4667-be20-60867dd3a815" (UID: "58284bfe-441b-4667-be20-60867dd3a815"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 16:00:47 crc kubenswrapper[4731]: I1125 16:00:47.456996 4731 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58284bfe-441b-4667-be20-60867dd3a815-host\") on node \"crc\" DevicePath \"\"" Nov 25 16:00:47 crc kubenswrapper[4731]: I1125 16:00:47.461713 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58284bfe-441b-4667-be20-60867dd3a815-kube-api-access-29p94" (OuterVolumeSpecName: "kube-api-access-29p94") pod "58284bfe-441b-4667-be20-60867dd3a815" (UID: "58284bfe-441b-4667-be20-60867dd3a815"). InnerVolumeSpecName "kube-api-access-29p94". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:00:47 crc kubenswrapper[4731]: I1125 16:00:47.559208 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29p94\" (UniqueName: \"kubernetes.io/projected/58284bfe-441b-4667-be20-60867dd3a815-kube-api-access-29p94\") on node \"crc\" DevicePath \"\"" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.264029 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1b82ea1aa83775b886ba83c77376f2df0cec8b4f8c37c8031d9b73a8607c080" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.264321 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4dfrk/crc-debug-7vtt2" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.606052 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-4dfrk/crc-debug-bt9lw"] Nov 25 16:00:48 crc kubenswrapper[4731]: E1125 16:00:48.606556 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58284bfe-441b-4667-be20-60867dd3a815" containerName="container-00" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.606808 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="58284bfe-441b-4667-be20-60867dd3a815" containerName="container-00" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.607094 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="58284bfe-441b-4667-be20-60867dd3a815" containerName="container-00" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.607827 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.610062 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-4dfrk"/"default-dockercfg-59kj9" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.681900 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swph9\" (UniqueName: \"kubernetes.io/projected/e804afb1-d547-40e1-91c0-c338a50b490c-kube-api-access-swph9\") pod \"crc-debug-bt9lw\" (UID: \"e804afb1-d547-40e1-91c0-c338a50b490c\") " pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.682007 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e804afb1-d547-40e1-91c0-c338a50b490c-host\") pod \"crc-debug-bt9lw\" (UID: \"e804afb1-d547-40e1-91c0-c338a50b490c\") " pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.784489 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swph9\" (UniqueName: \"kubernetes.io/projected/e804afb1-d547-40e1-91c0-c338a50b490c-kube-api-access-swph9\") pod \"crc-debug-bt9lw\" (UID: \"e804afb1-d547-40e1-91c0-c338a50b490c\") " pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.784820 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e804afb1-d547-40e1-91c0-c338a50b490c-host\") pod \"crc-debug-bt9lw\" (UID: \"e804afb1-d547-40e1-91c0-c338a50b490c\") " pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.784965 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e804afb1-d547-40e1-91c0-c338a50b490c-host\") pod \"crc-debug-bt9lw\" (UID: \"e804afb1-d547-40e1-91c0-c338a50b490c\") " pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.809653 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swph9\" (UniqueName: \"kubernetes.io/projected/e804afb1-d547-40e1-91c0-c338a50b490c-kube-api-access-swph9\") pod \"crc-debug-bt9lw\" (UID: \"e804afb1-d547-40e1-91c0-c338a50b490c\") " pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 
16:00:48.830908 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58284bfe-441b-4667-be20-60867dd3a815" path="/var/lib/kubelet/pods/58284bfe-441b-4667-be20-60867dd3a815/volumes" Nov 25 16:00:48 crc kubenswrapper[4731]: I1125 16:00:48.924529 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" Nov 25 16:00:49 crc kubenswrapper[4731]: I1125 16:00:49.275646 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" event={"ID":"e804afb1-d547-40e1-91c0-c338a50b490c","Type":"ContainerStarted","Data":"65b2a7a632e7555ab7e194f10586f50f10fdd8fdaeb35f31720d3e8340faa56a"} Nov 25 16:00:50 crc kubenswrapper[4731]: I1125 16:00:50.294212 4731 generic.go:334] "Generic (PLEG): container finished" podID="e804afb1-d547-40e1-91c0-c338a50b490c" containerID="aee9a8e7207625da8bb6b261b56a0e90638f3b348bd16fbb3b4d5c47b18395cb" exitCode=1 Nov 25 16:00:50 crc kubenswrapper[4731]: I1125 16:00:50.294299 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" event={"ID":"e804afb1-d547-40e1-91c0-c338a50b490c","Type":"ContainerDied","Data":"aee9a8e7207625da8bb6b261b56a0e90638f3b348bd16fbb3b4d5c47b18395cb"} Nov 25 16:00:50 crc kubenswrapper[4731]: I1125 16:00:50.334913 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4dfrk/crc-debug-bt9lw"] Nov 25 16:00:50 crc kubenswrapper[4731]: I1125 16:00:50.347382 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4dfrk/crc-debug-bt9lw"] Nov 25 16:00:51 crc kubenswrapper[4731]: I1125 16:00:51.422855 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" Nov 25 16:00:51 crc kubenswrapper[4731]: I1125 16:00:51.541272 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e804afb1-d547-40e1-91c0-c338a50b490c-host\") pod \"e804afb1-d547-40e1-91c0-c338a50b490c\" (UID: \"e804afb1-d547-40e1-91c0-c338a50b490c\") " Nov 25 16:00:51 crc kubenswrapper[4731]: I1125 16:00:51.541601 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swph9\" (UniqueName: \"kubernetes.io/projected/e804afb1-d547-40e1-91c0-c338a50b490c-kube-api-access-swph9\") pod \"e804afb1-d547-40e1-91c0-c338a50b490c\" (UID: \"e804afb1-d547-40e1-91c0-c338a50b490c\") " Nov 25 16:00:51 crc kubenswrapper[4731]: I1125 16:00:51.541420 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e804afb1-d547-40e1-91c0-c338a50b490c-host" (OuterVolumeSpecName: "host") pod "e804afb1-d547-40e1-91c0-c338a50b490c" (UID: "e804afb1-d547-40e1-91c0-c338a50b490c"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Nov 25 16:00:51 crc kubenswrapper[4731]: I1125 16:00:51.542426 4731 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e804afb1-d547-40e1-91c0-c338a50b490c-host\") on node \"crc\" DevicePath \"\"" Nov 25 16:00:51 crc kubenswrapper[4731]: I1125 16:00:51.548767 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e804afb1-d547-40e1-91c0-c338a50b490c-kube-api-access-swph9" (OuterVolumeSpecName: "kube-api-access-swph9") pod "e804afb1-d547-40e1-91c0-c338a50b490c" (UID: "e804afb1-d547-40e1-91c0-c338a50b490c"). 
InnerVolumeSpecName "kube-api-access-swph9". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:00:51 crc kubenswrapper[4731]: I1125 16:00:51.644456 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swph9\" (UniqueName: \"kubernetes.io/projected/e804afb1-d547-40e1-91c0-c338a50b490c-kube-api-access-swph9\") on node \"crc\" DevicePath \"\"" Nov 25 16:00:52 crc kubenswrapper[4731]: I1125 16:00:52.313484 4731 scope.go:117] "RemoveContainer" containerID="aee9a8e7207625da8bb6b261b56a0e90638f3b348bd16fbb3b4d5c47b18395cb" Nov 25 16:00:52 crc kubenswrapper[4731]: I1125 16:00:52.313523 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-4dfrk/crc-debug-bt9lw" Nov 25 16:00:52 crc kubenswrapper[4731]: I1125 16:00:52.822419 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e804afb1-d547-40e1-91c0-c338a50b490c" path="/var/lib/kubelet/pods/e804afb1-d547-40e1-91c0-c338a50b490c/volumes" Nov 25 16:00:55 crc kubenswrapper[4731]: I1125 16:00:55.213143 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rb6lz" Nov 25 16:00:55 crc kubenswrapper[4731]: I1125 16:00:55.214397 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rb6lz" Nov 25 16:00:56 crc kubenswrapper[4731]: I1125 16:00:56.264568 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rb6lz" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="registry-server" probeResult="failure" output=< Nov 25 16:00:56 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 16:00:56 crc kubenswrapper[4731]: > Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.755957 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zz4hd"] Nov 25 16:00:57 crc kubenswrapper[4731]: E1125 16:00:57.756648 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e804afb1-d547-40e1-91c0-c338a50b490c" containerName="container-00" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.756664 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="e804afb1-d547-40e1-91c0-c338a50b490c" containerName="container-00" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.756893 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="e804afb1-d547-40e1-91c0-c338a50b490c" containerName="container-00" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.758345 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.772450 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zz4hd"] Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.872463 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-catalog-content\") pod \"community-operators-zz4hd\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.872641 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h8ln\" (UniqueName: \"kubernetes.io/projected/c504275d-bf6a-4a8a-a3e6-837da3007238-kube-api-access-6h8ln\") pod \"community-operators-zz4hd\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.872733 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-utilities\") pod \"community-operators-zz4hd\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.974593 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6h8ln\" (UniqueName: \"kubernetes.io/projected/c504275d-bf6a-4a8a-a3e6-837da3007238-kube-api-access-6h8ln\") pod \"community-operators-zz4hd\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.974697 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-utilities\") pod \"community-operators-zz4hd\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.974834 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-catalog-content\") pod \"community-operators-zz4hd\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.975288 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-utilities\") pod \"community-operators-zz4hd\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:57 crc kubenswrapper[4731]: I1125 16:00:57.975353 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-catalog-content\") pod \"community-operators-zz4hd\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:58 crc kubenswrapper[4731]: I1125 16:00:58.007561 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6h8ln\" (UniqueName: \"kubernetes.io/projected/c504275d-bf6a-4a8a-a3e6-837da3007238-kube-api-access-6h8ln\") pod \"community-operators-zz4hd\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:58 crc kubenswrapper[4731]: I1125 16:00:58.078558 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:00:58 crc kubenswrapper[4731]: W1125 16:00:58.756270 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc504275d_bf6a_4a8a_a3e6_837da3007238.slice/crio-d6daa1340b11d72a39ff76a3ad5da8f235eda7f030862354d3f96b0244f18c2f WatchSource:0}: Error finding container d6daa1340b11d72a39ff76a3ad5da8f235eda7f030862354d3f96b0244f18c2f: Status 404 returned error can't find the container with id d6daa1340b11d72a39ff76a3ad5da8f235eda7f030862354d3f96b0244f18c2f Nov 25 16:00:58 crc kubenswrapper[4731]: I1125 16:00:58.756316 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zz4hd"] Nov 25 16:00:59 crc kubenswrapper[4731]: I1125 16:00:59.405971 4731 generic.go:334] "Generic (PLEG): container finished" podID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerID="0a37e6b4cab382950264a82dc4eb8339d660d5af68b988d9acbf336dd86d19fd" exitCode=0 Nov 25 16:00:59 crc kubenswrapper[4731]: I1125 16:00:59.406279 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4hd" event={"ID":"c504275d-bf6a-4a8a-a3e6-837da3007238","Type":"ContainerDied","Data":"0a37e6b4cab382950264a82dc4eb8339d660d5af68b988d9acbf336dd86d19fd"} Nov 25 16:00:59 crc kubenswrapper[4731]: I1125 16:00:59.406306 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4hd" event={"ID":"c504275d-bf6a-4a8a-a3e6-837da3007238","Type":"ContainerStarted","Data":"d6daa1340b11d72a39ff76a3ad5da8f235eda7f030862354d3f96b0244f18c2f"} Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.153433 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29401441-6mw2n"] Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.155471 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.178929 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401441-6mw2n"] Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.223657 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4zkg\" (UniqueName: \"kubernetes.io/projected/c1605aa1-c529-4c48-ac65-895048226e49-kube-api-access-z4zkg\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.224022 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-fernet-keys\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.224200 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-combined-ca-bundle\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.224289 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-config-data\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.326689 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4zkg\" (UniqueName: \"kubernetes.io/projected/c1605aa1-c529-4c48-ac65-895048226e49-kube-api-access-z4zkg\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.327052 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-fernet-keys\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.327234 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-combined-ca-bundle\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.327347 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-config-data\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.334512 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-config-data\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.334832 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-fernet-keys\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.346551 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-combined-ca-bundle\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.347207 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4zkg\" (UniqueName: \"kubernetes.io/projected/c1605aa1-c529-4c48-ac65-895048226e49-kube-api-access-z4zkg\") pod \"keystone-cron-29401441-6mw2n\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:00 crc kubenswrapper[4731]: I1125 16:01:00.477261 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:01 crc kubenswrapper[4731]: I1125 16:01:01.006894 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29401441-6mw2n"] Nov 25 16:01:01 crc kubenswrapper[4731]: I1125 16:01:01.435526 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4hd" event={"ID":"c504275d-bf6a-4a8a-a3e6-837da3007238","Type":"ContainerStarted","Data":"8c1cc3e247ed2f84e3f0689ccc4e4bab377e4f32f82623e359fc791bd886ff7e"} Nov 25 16:01:01 crc kubenswrapper[4731]: I1125 16:01:01.437624 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401441-6mw2n" event={"ID":"c1605aa1-c529-4c48-ac65-895048226e49","Type":"ContainerStarted","Data":"7abeaf5c91e0ce0bd4590a993fec11eab5d046ae0eab6aa0aad0bc3d079f3b1b"} Nov 25 16:01:01 crc kubenswrapper[4731]: I1125 16:01:01.437658 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401441-6mw2n" event={"ID":"c1605aa1-c529-4c48-ac65-895048226e49","Type":"ContainerStarted","Data":"3c72afa8aa06d64023acdbaac8a391a8d786680af91cc16aadc4b4dab0e7db8e"} Nov 25 16:01:05 crc kubenswrapper[4731]: I1125 16:01:05.477398 4731 generic.go:334] "Generic (PLEG): container finished" podID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerID="8c1cc3e247ed2f84e3f0689ccc4e4bab377e4f32f82623e359fc791bd886ff7e" exitCode=0 Nov 25 16:01:05 crc kubenswrapper[4731]: I1125 16:01:05.477482 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4hd" event={"ID":"c504275d-bf6a-4a8a-a3e6-837da3007238","Type":"ContainerDied","Data":"8c1cc3e247ed2f84e3f0689ccc4e4bab377e4f32f82623e359fc791bd886ff7e"} Nov 25 16:01:05 crc kubenswrapper[4731]: I1125 16:01:05.503452 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29401441-6mw2n" podStartSLOduration=5.503426906 podStartE2EDuration="5.503426906s" 
podCreationTimestamp="2025-11-25 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 16:01:01.498129414 +0000 UTC m=+4020.245263854" watchObservedRunningTime="2025-11-25 16:01:05.503426906 +0000 UTC m=+4024.250561336" Nov 25 16:01:06 crc kubenswrapper[4731]: I1125 16:01:06.269173 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rb6lz" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="registry-server" probeResult="failure" output=< Nov 25 16:01:06 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 16:01:06 crc kubenswrapper[4731]: > Nov 25 16:01:07 crc kubenswrapper[4731]: I1125 16:01:07.519532 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4hd" event={"ID":"c504275d-bf6a-4a8a-a3e6-837da3007238","Type":"ContainerStarted","Data":"fd5c8f7dcb85977d44110059f3b3cceecf780ffd11f19b78bcb739246d8f2f6b"} Nov 25 16:01:07 crc kubenswrapper[4731]: I1125 16:01:07.552108 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zz4hd" podStartSLOduration=3.521581126 podStartE2EDuration="10.552087841s" podCreationTimestamp="2025-11-25 16:00:57 +0000 UTC" firstStartedPulling="2025-11-25 16:00:59.408091506 +0000 UTC m=+4018.155225936" lastFinishedPulling="2025-11-25 16:01:06.438598221 +0000 UTC m=+4025.185732651" observedRunningTime="2025-11-25 16:01:07.544357468 +0000 UTC m=+4026.291491898" watchObservedRunningTime="2025-11-25 16:01:07.552087841 +0000 UTC m=+4026.299222261" Nov 25 16:01:08 crc kubenswrapper[4731]: I1125 16:01:08.079378 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:01:08 crc kubenswrapper[4731]: I1125 16:01:08.079529 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:01:09 crc kubenswrapper[4731]: I1125 16:01:09.139463 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zz4hd" podUID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerName="registry-server" probeResult="failure" output=< Nov 25 16:01:09 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 16:01:09 crc kubenswrapper[4731]: > Nov 25 16:01:10 crc kubenswrapper[4731]: I1125 16:01:10.557126 4731 generic.go:334] "Generic (PLEG): container finished" podID="c1605aa1-c529-4c48-ac65-895048226e49" containerID="7abeaf5c91e0ce0bd4590a993fec11eab5d046ae0eab6aa0aad0bc3d079f3b1b" exitCode=0 Nov 25 16:01:10 crc kubenswrapper[4731]: I1125 16:01:10.557233 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401441-6mw2n" event={"ID":"c1605aa1-c529-4c48-ac65-895048226e49","Type":"ContainerDied","Data":"7abeaf5c91e0ce0bd4590a993fec11eab5d046ae0eab6aa0aad0bc3d079f3b1b"} Nov 25 16:01:11 crc kubenswrapper[4731]: I1125 16:01:11.892196 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:11 crc kubenswrapper[4731]: I1125 16:01:11.969687 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-config-data\") pod \"c1605aa1-c529-4c48-ac65-895048226e49\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " Nov 25 16:01:11 crc kubenswrapper[4731]: I1125 16:01:11.970205 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4zkg\" (UniqueName: \"kubernetes.io/projected/c1605aa1-c529-4c48-ac65-895048226e49-kube-api-access-z4zkg\") pod \"c1605aa1-c529-4c48-ac65-895048226e49\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " Nov 25 16:01:11 crc kubenswrapper[4731]: I1125 16:01:11.970255 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-fernet-keys\") pod \"c1605aa1-c529-4c48-ac65-895048226e49\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " Nov 25 16:01:11 crc kubenswrapper[4731]: I1125 16:01:11.970294 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-combined-ca-bundle\") pod \"c1605aa1-c529-4c48-ac65-895048226e49\" (UID: \"c1605aa1-c529-4c48-ac65-895048226e49\") " Nov 25 16:01:11 crc kubenswrapper[4731]: I1125 16:01:11.977620 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c1605aa1-c529-4c48-ac65-895048226e49" (UID: "c1605aa1-c529-4c48-ac65-895048226e49"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 16:01:11 crc kubenswrapper[4731]: I1125 16:01:11.978752 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1605aa1-c529-4c48-ac65-895048226e49-kube-api-access-z4zkg" (OuterVolumeSpecName: "kube-api-access-z4zkg") pod "c1605aa1-c529-4c48-ac65-895048226e49" (UID: "c1605aa1-c529-4c48-ac65-895048226e49"). InnerVolumeSpecName "kube-api-access-z4zkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:01:12 crc kubenswrapper[4731]: I1125 16:01:12.004765 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c1605aa1-c529-4c48-ac65-895048226e49" (UID: "c1605aa1-c529-4c48-ac65-895048226e49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 16:01:12 crc kubenswrapper[4731]: I1125 16:01:12.036999 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-config-data" (OuterVolumeSpecName: "config-data") pod "c1605aa1-c529-4c48-ac65-895048226e49" (UID: "c1605aa1-c529-4c48-ac65-895048226e49"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Nov 25 16:01:12 crc kubenswrapper[4731]: I1125 16:01:12.073554 4731 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-fernet-keys\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:12 crc kubenswrapper[4731]: I1125 16:01:12.073597 4731 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:12 crc kubenswrapper[4731]: I1125 16:01:12.073612 4731 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1605aa1-c529-4c48-ac65-895048226e49-config-data\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:12 crc kubenswrapper[4731]: I1125 16:01:12.073624 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4zkg\" (UniqueName: \"kubernetes.io/projected/c1605aa1-c529-4c48-ac65-895048226e49-kube-api-access-z4zkg\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:12 crc kubenswrapper[4731]: I1125 16:01:12.594331 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29401441-6mw2n" event={"ID":"c1605aa1-c529-4c48-ac65-895048226e49","Type":"ContainerDied","Data":"3c72afa8aa06d64023acdbaac8a391a8d786680af91cc16aadc4b4dab0e7db8e"} Nov 25 16:01:12 crc kubenswrapper[4731]: I1125 16:01:12.594726 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c72afa8aa06d64023acdbaac8a391a8d786680af91cc16aadc4b4dab0e7db8e" Nov 25 16:01:12 crc kubenswrapper[4731]: I1125 16:01:12.594893 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29401441-6mw2n" Nov 25 16:01:16 crc kubenswrapper[4731]: I1125 16:01:16.258847 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rb6lz" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="registry-server" probeResult="failure" output=< Nov 25 16:01:16 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 16:01:16 crc kubenswrapper[4731]: > Nov 25 16:01:18 crc kubenswrapper[4731]: I1125 16:01:18.141276 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:01:18 crc kubenswrapper[4731]: I1125 16:01:18.194509 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:01:25 crc kubenswrapper[4731]: I1125 16:01:25.830908 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a/init-config-reloader/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.064500 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a/init-config-reloader/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.098476 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a/alertmanager/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.105935 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_alertmanager-metric-storage-0_7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a/config-reloader/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.279932 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e8bbb605-448a-4d7a-a7b1-55d3cacec4f7/aodh-api/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.322337 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e8bbb605-448a-4d7a-a7b1-55d3cacec4f7/aodh-evaluator/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.345652 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e8bbb605-448a-4d7a-a7b1-55d3cacec4f7/aodh-listener/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.455075 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e8bbb605-448a-4d7a-a7b1-55d3cacec4f7/aodh-notifier/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.521909 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6dc4f8989b-lbfrn_5b11609c-ee96-473a-b1e6-dbbf07a9bab2/barbican-api/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.592724 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zz4hd"] Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.593389 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zz4hd" podUID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerName="registry-server" containerID="cri-o://fd5c8f7dcb85977d44110059f3b3cceecf780ffd11f19b78bcb739246d8f2f6b" gracePeriod=2 Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.610554 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rb6lz" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="registry-server" probeResult="failure" output=< Nov 25 16:01:26 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 16:01:26 crc kubenswrapper[4731]: > Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.642180 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6dc4f8989b-lbfrn_5b11609c-ee96-473a-b1e6-dbbf07a9bab2/barbican-api-log/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.798637 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5c7d6d4554-f24zm_b8ba4d2e-6323-48b0-b781-313cab464832/barbican-keystone-listener/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.807159 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5c7d6d4554-f24zm_b8ba4d2e-6323-48b0-b781-313cab464832/barbican-keystone-listener-log/0.log" Nov 25 16:01:26 crc kubenswrapper[4731]: I1125 16:01:26.927377 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dd99f9879-qskfg_4493c299-fefe-4934-b370-f2270a1b6226/barbican-worker/0.log" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.043115 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dd99f9879-qskfg_4493c299-fefe-4934-b370-f2270a1b6226/barbican-worker-log/0.log" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.215508 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf_141271e2-9df3-449f-9590-6d7c43c8563b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.746272 4731 generic.go:334] "Generic (PLEG): container finished" podID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerID="fd5c8f7dcb85977d44110059f3b3cceecf780ffd11f19b78bcb739246d8f2f6b" exitCode=0 Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.746422 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4hd" event={"ID":"c504275d-bf6a-4a8a-a3e6-837da3007238","Type":"ContainerDied","Data":"fd5c8f7dcb85977d44110059f3b3cceecf780ffd11f19b78bcb739246d8f2f6b"} Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.746618 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zz4hd" event={"ID":"c504275d-bf6a-4a8a-a3e6-837da3007238","Type":"ContainerDied","Data":"d6daa1340b11d72a39ff76a3ad5da8f235eda7f030862354d3f96b0244f18c2f"} Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.746639 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6daa1340b11d72a39ff76a3ad5da8f235eda7f030862354d3f96b0244f18c2f" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.763610 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.774542 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2d6f5688-5558-47ce-87a9-01d3f428c541/ceilometer-notification-agent/0.log" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.802686 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2d6f5688-5558-47ce-87a9-01d3f428c541/sg-core/0.log" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.834275 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2d6f5688-5558-47ce-87a9-01d3f428c541/ceilometer-central-agent/0.log" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.845845 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2d6f5688-5558-47ce-87a9-01d3f428c541/proxy-httpd/0.log" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.912480 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-utilities\") pod \"c504275d-bf6a-4a8a-a3e6-837da3007238\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.912850 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-catalog-content\") pod \"c504275d-bf6a-4a8a-a3e6-837da3007238\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.913032 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6h8ln\" (UniqueName: \"kubernetes.io/projected/c504275d-bf6a-4a8a-a3e6-837da3007238-kube-api-access-6h8ln\") pod \"c504275d-bf6a-4a8a-a3e6-837da3007238\" (UID: \"c504275d-bf6a-4a8a-a3e6-837da3007238\") " Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.913483 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-utilities" (OuterVolumeSpecName: "utilities") pod "c504275d-bf6a-4a8a-a3e6-837da3007238" (UID: "c504275d-bf6a-4a8a-a3e6-837da3007238"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.913723 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:27 crc kubenswrapper[4731]: I1125 16:01:27.935539 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c504275d-bf6a-4a8a-a3e6-837da3007238-kube-api-access-6h8ln" (OuterVolumeSpecName: "kube-api-access-6h8ln") pod "c504275d-bf6a-4a8a-a3e6-837da3007238" (UID: "c504275d-bf6a-4a8a-a3e6-837da3007238"). InnerVolumeSpecName "kube-api-access-6h8ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.015317 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6h8ln\" (UniqueName: \"kubernetes.io/projected/c504275d-bf6a-4a8a-a3e6-837da3007238-kube-api-access-6h8ln\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.034558 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c504275d-bf6a-4a8a-a3e6-837da3007238" (UID: "c504275d-bf6a-4a8a-a3e6-837da3007238"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.117468 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c504275d-bf6a-4a8a-a3e6-837da3007238-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.134000 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3/cinder-api/0.log" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.142281 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3/cinder-api-log/0.log" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.366739 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_23096767-3b8d-4153-8d68-0285b66b2b1b/cinder-scheduler/0.log" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.423316 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_23096767-3b8d-4153-8d68-0285b66b2b1b/probe/0.log" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.436559 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5_75b68a16-e019-49f8-89b7-daa43f7c1fa0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.682487 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd_6c7d1dc4-dbc1-4d90-a697-4c4600bb041e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.752726 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-n9ptm_fb99c57e-0890-4b05-9072-311048298a07/init/0.log" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.754573 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zz4hd" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.798403 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zz4hd"] Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.809586 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zz4hd"] Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.828789 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c504275d-bf6a-4a8a-a3e6-837da3007238" path="/var/lib/kubelet/pods/c504275d-bf6a-4a8a-a3e6-837da3007238/volumes" Nov 25 16:01:28 crc kubenswrapper[4731]: I1125 16:01:28.914697 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-n9ptm_fb99c57e-0890-4b05-9072-311048298a07/init/0.log" Nov 25 16:01:29 crc kubenswrapper[4731]: I1125 16:01:29.011744 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-n9ptm_fb99c57e-0890-4b05-9072-311048298a07/dnsmasq-dns/0.log" Nov 25 16:01:29 crc kubenswrapper[4731]: I1125 16:01:29.049022 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm_0e902f4e-eb88-45a6-8150-73c6bce43b9f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:29 crc kubenswrapper[4731]: I1125 16:01:29.266739 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f6e0de1e-5e50-468c-a911-41b1c3e7261a/glance-httpd/0.log" Nov 25 16:01:29 crc kubenswrapper[4731]: I1125 16:01:29.280790 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f6e0de1e-5e50-468c-a911-41b1c3e7261a/glance-log/0.log" Nov 25 16:01:29 crc kubenswrapper[4731]: I1125 16:01:29.361577 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b713395b-6feb-411a-b13b-d35890a8b2d3/glance-httpd/0.log" Nov 25 16:01:29 crc kubenswrapper[4731]: I1125 16:01:29.429792 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b713395b-6feb-411a-b13b-d35890a8b2d3/glance-log/0.log" Nov 25 16:01:29 crc kubenswrapper[4731]: I1125 16:01:29.903886 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-574677c86-dzlj9_21056706-0226-48f6-bbe2-5a8c5fb49c7d/heat-engine/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.010738 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g_deaabd87-d775-4f26-bf8e-c515967c2867/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.045579 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-cdf8597bf-t8vdc_c7de698f-a9bd-4231-9cee-dd0c1c93e461/heat-api/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.046212 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-8cf475f5-wpg2b_f49ff135-870b-4b14-8ec3-a20bec8b9ba6/heat-cfnapi/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.088561 4731 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lls57_026d447e-1194-4316-b484-6525a214f761/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.331985 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401441-6mw2n_c1605aa1-c529-4c48-ac65-895048226e49/keystone-cron/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.367476 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-65d859dcf6-bblhv_3f147555-b406-4784-8ee3-b25c2521ce34/keystone-api/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.468514 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_57141195-3615-4e7f-b643-96c727cb9070/kube-state-metrics/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.566956 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gxthr_90c7443c-b0ed-4432-8a15-f1d26a2161e9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.881984 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-545595f45c-m798f_e70199f9-3fcf-4b0f-98c5-c45faf62c604/neutron-httpd/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.905649 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-545595f45c-m798f_e70199f9-3fcf-4b0f-98c5-c45faf62c604/neutron-api/0.log" Nov 25 16:01:30 crc kubenswrapper[4731]: I1125 16:01:30.986428 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6_62e629fe-ccc6-4323-b226-1bc0b4591c2a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:31 crc kubenswrapper[4731]: I1125 16:01:31.309788 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f/nova-api-log/0.log" Nov 25 16:01:31 crc kubenswrapper[4731]: I1125 16:01:31.510683 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3/nova-cell0-conductor-conductor/0.log" Nov 25 16:01:31 crc kubenswrapper[4731]: I1125 16:01:31.586265 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f/nova-api-api/0.log" Nov 25 16:01:31 crc kubenswrapper[4731]: I1125 16:01:31.697338 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_cfc2b506-8d10-4cfc-ae43-0978926809ac/nova-cell1-conductor-conductor/0.log" Nov 25 16:01:31 crc kubenswrapper[4731]: I1125 16:01:31.887006 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d1ab61a1-85f7-4bf9-93f8-598cc80c8283/nova-cell1-novncproxy-novncproxy/0.log" Nov 25 16:01:32 crc kubenswrapper[4731]: I1125 16:01:32.058498 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-9jzw8_6ca6a8fb-1ee3-40d3-b8c7-733438b20c35/nova-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:32 crc kubenswrapper[4731]: I1125 16:01:32.207265 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_209fb0e3-5c2d-4e1d-b735-432bc80aa947/nova-metadata-log/0.log" Nov 25 16:01:32 crc kubenswrapper[4731]: I1125 16:01:32.500137 4731 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_250bffb8-9700-49f5-a1a2-39577b5bcba3/nova-scheduler-scheduler/0.log" Nov 25 16:01:32 crc kubenswrapper[4731]: I1125 16:01:32.569369 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_600ccfb1-2789-4b20-8012-9b6bdbf836ff/mysql-bootstrap/0.log" Nov 25 16:01:32 crc kubenswrapper[4731]: I1125 16:01:32.771446 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_600ccfb1-2789-4b20-8012-9b6bdbf836ff/galera/0.log" Nov 25 16:01:32 crc kubenswrapper[4731]: I1125 16:01:32.831777 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_600ccfb1-2789-4b20-8012-9b6bdbf836ff/mysql-bootstrap/0.log" Nov 25 16:01:32 crc kubenswrapper[4731]: I1125 16:01:32.989915 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c32b705-7078-46a9-9858-16932a2b2675/mysql-bootstrap/0.log" Nov 25 16:01:33 crc kubenswrapper[4731]: I1125 16:01:33.239018 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c32b705-7078-46a9-9858-16932a2b2675/mysql-bootstrap/0.log" Nov 25 16:01:33 crc kubenswrapper[4731]: I1125 16:01:33.283376 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c32b705-7078-46a9-9858-16932a2b2675/galera/0.log" Nov 25 16:01:33 crc kubenswrapper[4731]: I1125 16:01:33.468286 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_37c1e8ac-9952-41e7-9572-b2112cbd8d28/openstackclient/0.log" Nov 25 16:01:33 crc kubenswrapper[4731]: I1125 16:01:33.505330 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-crvzg_5881193a-868d-4633-ae6d-0428aa755057/ovn-controller/0.log" Nov 25 16:01:33 crc kubenswrapper[4731]: I1125 16:01:33.533765 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_209fb0e3-5c2d-4e1d-b735-432bc80aa947/nova-metadata-metadata/0.log" Nov 25 16:01:33 crc kubenswrapper[4731]: I1125 16:01:33.774541 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hvl9d_e1f1fddc-9c04-4e31-b6c4-528d39e050a8/openstack-network-exporter/0.log" Nov 25 16:01:34 crc kubenswrapper[4731]: I1125 16:01:34.025964 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w2gw5_0b83f089-05d3-4362-b02c-ba4293807c86/ovsdb-server-init/0.log" Nov 25 16:01:34 crc kubenswrapper[4731]: I1125 16:01:34.280679 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w2gw5_0b83f089-05d3-4362-b02c-ba4293807c86/ovs-vswitchd/0.log" Nov 25 16:01:34 crc kubenswrapper[4731]: I1125 16:01:34.284013 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w2gw5_0b83f089-05d3-4362-b02c-ba4293807c86/ovsdb-server/0.log" Nov 25 16:01:34 crc kubenswrapper[4731]: I1125 16:01:34.294871 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w2gw5_0b83f089-05d3-4362-b02c-ba4293807c86/ovsdb-server-init/0.log" Nov 25 16:01:34 crc kubenswrapper[4731]: I1125 16:01:34.581909 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_dc7742e1-0cb7-4119-be98-f544587f3687/openstack-network-exporter/0.log" Nov 25 16:01:34 crc kubenswrapper[4731]: I1125 16:01:34.589845 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-lks4s_0b01dd33-8188-45e4-a53a-c01ea9df00f7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:34 crc kubenswrapper[4731]: I1125 16:01:34.641752 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_dc7742e1-0cb7-4119-be98-f544587f3687/ovn-northd/0.log" Nov 25 16:01:34 crc kubenswrapper[4731]: I1125 16:01:34.780502 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fecb6adf-6993-4a45-a85a-4fa15a484be1/openstack-network-exporter/0.log" Nov 25 16:01:34 crc kubenswrapper[4731]: I1125 16:01:34.812392 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fecb6adf-6993-4a45-a85a-4fa15a484be1/ovsdbserver-nb/0.log" Nov 25 16:01:35 crc kubenswrapper[4731]: I1125 16:01:35.404762 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rb6lz" Nov 25 16:01:35 crc kubenswrapper[4731]: I1125 16:01:35.460758 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rb6lz" Nov 25 16:01:35 crc kubenswrapper[4731]: I1125 16:01:35.646913 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rb6lz"] Nov 25 16:01:35 crc kubenswrapper[4731]: I1125 16:01:35.778389 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1b455e72-bfa2-4a6f-9b54-8424af4e4c53/ovsdbserver-sb/0.log" Nov 25 16:01:35 crc kubenswrapper[4731]: I1125 16:01:35.832107 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1b455e72-bfa2-4a6f-9b54-8424af4e4c53/openstack-network-exporter/0.log" Nov 25 16:01:36 crc kubenswrapper[4731]: I1125 16:01:36.043176 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-ccd9b5c94-l8cl7_3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b/placement-api/0.log" Nov 25 16:01:36 crc kubenswrapper[4731]: I1125 16:01:36.150872 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-ccd9b5c94-l8cl7_3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b/placement-log/0.log" Nov 25 16:01:36 crc kubenswrapper[4731]: I1125 16:01:36.175157 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_053ea4bb-51e8-4900-b8ad-fae14168a43f/init-config-reloader/0.log" Nov 25 16:01:36 crc kubenswrapper[4731]: I1125 16:01:36.383543 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_053ea4bb-51e8-4900-b8ad-fae14168a43f/config-reloader/0.log" Nov 25 16:01:36 crc kubenswrapper[4731]: I1125 16:01:36.406416 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_053ea4bb-51e8-4900-b8ad-fae14168a43f/prometheus/0.log" Nov 25 16:01:36 crc kubenswrapper[4731]: I1125 16:01:36.433509 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_053ea4bb-51e8-4900-b8ad-fae14168a43f/thanos-sidecar/0.log" Nov 25 16:01:36 crc kubenswrapper[4731]: I1125 16:01:36.476643 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_053ea4bb-51e8-4900-b8ad-fae14168a43f/init-config-reloader/0.log" Nov 25 16:01:36 crc kubenswrapper[4731]: I1125 16:01:36.585974 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9902e27-284b-4015-994b-9b27f3254c5d/setup-container/0.log" Nov 25 16:01:36 crc kubenswrapper[4731]: I1125 16:01:36.860291 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rb6lz" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="registry-server" containerID="cri-o://e7261629d93e63840501c165e61693e740a9d597fc644323362ef44ce4ddfd08" gracePeriod=2 Nov 25 16:01:37 crc kubenswrapper[4731]: I1125 16:01:37.345282 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_abe2a548-008b-4d3d-b0d1-d6e62dc15a4d/setup-container/0.log" Nov 25 16:01:37 crc kubenswrapper[4731]: I1125 16:01:37.395147 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9902e27-284b-4015-994b-9b27f3254c5d/rabbitmq/0.log" Nov 25 16:01:37 crc kubenswrapper[4731]: I1125 16:01:37.414163 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9902e27-284b-4015-994b-9b27f3254c5d/setup-container/0.log" Nov 25 16:01:37 crc kubenswrapper[4731]: I1125 16:01:37.560298 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_abe2a548-008b-4d3d-b0d1-d6e62dc15a4d/setup-container/0.log" Nov 25 16:01:37 crc kubenswrapper[4731]: I1125 16:01:37.682959 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh_4eb1672c-88da-4d49-a0dc-6ddaabc48284/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:37 crc kubenswrapper[4731]: I1125 16:01:37.776001 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_abe2a548-008b-4d3d-b0d1-d6e62dc15a4d/rabbitmq/0.log" Nov 25 16:01:37 crc kubenswrapper[4731]: I1125 16:01:37.911015 4731 generic.go:334] "Generic (PLEG): container finished" podID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerID="e7261629d93e63840501c165e61693e740a9d597fc644323362ef44ce4ddfd08" exitCode=0 Nov 25 16:01:37 crc kubenswrapper[4731]: I1125 16:01:37.911074 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb6lz" event={"ID":"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b","Type":"ContainerDied","Data":"e7261629d93e63840501c165e61693e740a9d597fc644323362ef44ce4ddfd08"} Nov 25 16:01:37 crc kubenswrapper[4731]: I1125 16:01:37.944104 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-h9bbc_4e21c7c5-3531-4470-b2d5-a57c748b4add/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.053216 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq_35e801a7-1cfb-402e-9194-9e25042d0d85/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.102441 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rb6lz" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.204958 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-utilities\") pod \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.205446 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-catalog-content\") pod \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.205510 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnf4n\" (UniqueName: \"kubernetes.io/projected/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-kube-api-access-bnf4n\") pod \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\" (UID: \"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b\") " Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.207334 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-utilities" (OuterVolumeSpecName: "utilities") pod "6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" (UID: "6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.216722 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-kube-api-access-bnf4n" (OuterVolumeSpecName: "kube-api-access-bnf4n") pod "6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" (UID: "6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b"). InnerVolumeSpecName "kube-api-access-bnf4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.223373 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bqnjx_ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.307550 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bnf4n\" (UniqueName: \"kubernetes.io/projected/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-kube-api-access-bnf4n\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.307587 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.357476 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" (UID: "6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.377731 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sgtgr_b6f2222b-0367-4b65-8481-bccefb346462/ssh-known-hosts-edpm-deployment/0.log" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.409242 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.623072 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b87f8dc-297hh_5a662040-fd64-4441-86b9-9fc355a75142/proxy-server/0.log" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.705560 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b87f8dc-297hh_5a662040-fd64-4441-86b9-9fc355a75142/proxy-httpd/0.log" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.767460 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-br7gv_6528b54d-5bbd-4f24-9041-e1dd5fef70c7/swift-ring-rebalance/0.log" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.880431 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/account-auditor/0.log" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.925275 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rb6lz" event={"ID":"6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b","Type":"ContainerDied","Data":"5d4cb2e059c20a667fb54a5e0f775b3697b483c156be44a9fd9cee4a861033a7"} Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.925336 4731 scope.go:117] "RemoveContainer" containerID="e7261629d93e63840501c165e61693e740a9d597fc644323362ef44ce4ddfd08" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.925519 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rb6lz" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.950830 4731 scope.go:117] "RemoveContainer" containerID="c557dd1e9ea27e3b571e61b4532b005fee4947b3a6a6e091150ca904a5f07334" Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.952660 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rb6lz"] Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.965614 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rb6lz"] Nov 25 16:01:38 crc kubenswrapper[4731]: I1125 16:01:38.993962 4731 scope.go:117] "RemoveContainer" containerID="2f269e55b94805405953940099b564a89707f64f334ef9c45a020b397ddedc93" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.057574 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-48smz"] Nov 25 16:01:39 crc kubenswrapper[4731]: E1125 16:01:39.058429 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerName="registry-server" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.058616 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerName="registry-server" Nov 25 16:01:39 crc kubenswrapper[4731]: E1125 16:01:39.058739 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerName="extract-utilities" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.058849 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerName="extract-utilities" Nov 25 16:01:39 crc kubenswrapper[4731]: E1125 16:01:39.058965 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="extract-utilities" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.059039 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="extract-utilities" Nov 25 16:01:39 crc kubenswrapper[4731]: E1125 16:01:39.059120 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="extract-content" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.059219 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="extract-content" Nov 25 16:01:39 crc kubenswrapper[4731]: E1125 16:01:39.059306 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1605aa1-c529-4c48-ac65-895048226e49" containerName="keystone-cron" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.059393 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1605aa1-c529-4c48-ac65-895048226e49" containerName="keystone-cron" Nov 25 16:01:39 crc kubenswrapper[4731]: E1125 16:01:39.059498 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerName="extract-content" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.059567 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerName="extract-content" Nov 25 16:01:39 crc kubenswrapper[4731]: E1125 16:01:39.059777 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="registry-server" Nov 25 16:01:39 crc 
kubenswrapper[4731]: I1125 16:01:39.059919 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="registry-server" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.060274 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c504275d-bf6a-4a8a-a3e6-837da3007238" containerName="registry-server" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.060379 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" containerName="registry-server" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.060488 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1605aa1-c529-4c48-ac65-895048226e49" containerName="keystone-cron" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.066492 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.090233 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48smz"] Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.094515 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/account-replicator/0.log" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.118547 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/account-reaper/0.log" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.211798 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/account-server/0.log" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.227484 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfpx4\" (UniqueName: \"kubernetes.io/projected/5f376bca-97b3-4267-9139-366fc69fe99c-kube-api-access-mfpx4\") pod \"certified-operators-48smz\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.227544 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-catalog-content\") pod \"certified-operators-48smz\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.227608 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-utilities\") pod \"certified-operators-48smz\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.267500 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/container-auditor/0.log" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.329074 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfpx4\" (UniqueName: \"kubernetes.io/projected/5f376bca-97b3-4267-9139-366fc69fe99c-kube-api-access-mfpx4\") pod \"certified-operators-48smz\" 
(UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.329134 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-catalog-content\") pod \"certified-operators-48smz\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.329198 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-utilities\") pod \"certified-operators-48smz\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.329794 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-utilities\") pod \"certified-operators-48smz\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.330077 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-catalog-content\") pod \"certified-operators-48smz\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.349570 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfpx4\" (UniqueName: \"kubernetes.io/projected/5f376bca-97b3-4267-9139-366fc69fe99c-kube-api-access-mfpx4\") pod \"certified-operators-48smz\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.354443 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/container-replicator/0.log" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.399792 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.429060 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/container-server/0.log" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.516723 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/container-updater/0.log" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.742276 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/object-auditor/0.log" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.832438 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/object-replicator/0.log" Nov 25 16:01:39 crc kubenswrapper[4731]: I1125 16:01:39.848884 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/object-expirer/0.log" Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.003195 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48smz"] Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.019505 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/object-server/0.log" Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.098320 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/object-updater/0.log" Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.221963 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/rsync/0.log" Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.263024 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/swift-recon-cron/0.log" Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.466367 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-722hf_4fbd03b7-1cca-4719-ae8b-e532f0d61c9e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.527432 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp_15424794-1bec-4dde-9c0c-7828167a0bd7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.826700 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b" path="/var/lib/kubelet/pods/6d0ab9a9-d6eb-4e54-a19e-0f7c3d6e806b/volumes" Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.954612 4731 generic.go:334] "Generic (PLEG): container finished" podID="5f376bca-97b3-4267-9139-366fc69fe99c" containerID="df9270dfcb07c0cb042b2bdb589eab3d78c0c70aa809befa7c13436bcaefa9ba" exitCode=0 Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.954662 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48smz" 
event={"ID":"5f376bca-97b3-4267-9139-366fc69fe99c","Type":"ContainerDied","Data":"df9270dfcb07c0cb042b2bdb589eab3d78c0c70aa809befa7c13436bcaefa9ba"} Nov 25 16:01:40 crc kubenswrapper[4731]: I1125 16:01:40.954694 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48smz" event={"ID":"5f376bca-97b3-4267-9139-366fc69fe99c","Type":"ContainerStarted","Data":"9bf1d484ab5b8f3657e1953dc15d425ff42dbc2d3c1a04e73fbe0bf42da46ce8"} Nov 25 16:01:42 crc kubenswrapper[4731]: I1125 16:01:42.973206 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48smz" event={"ID":"5f376bca-97b3-4267-9139-366fc69fe99c","Type":"ContainerStarted","Data":"7c464636e21acc4d213e6699848c562357371543e3748753e676ea09ff2044e3"} Nov 25 16:01:46 crc kubenswrapper[4731]: I1125 16:01:46.017097 4731 generic.go:334] "Generic (PLEG): container finished" podID="5f376bca-97b3-4267-9139-366fc69fe99c" containerID="7c464636e21acc4d213e6699848c562357371543e3748753e676ea09ff2044e3" exitCode=0 Nov 25 16:01:46 crc kubenswrapper[4731]: I1125 16:01:46.017170 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48smz" event={"ID":"5f376bca-97b3-4267-9139-366fc69fe99c","Type":"ContainerDied","Data":"7c464636e21acc4d213e6699848c562357371543e3748753e676ea09ff2044e3"} Nov 25 16:01:49 crc kubenswrapper[4731]: I1125 16:01:49.065021 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48smz" event={"ID":"5f376bca-97b3-4267-9139-366fc69fe99c","Type":"ContainerStarted","Data":"07f7467ff310f76f22c646bdc7977a1784fa4d4167f0b946076ca06dee2a339f"} Nov 25 16:01:49 crc kubenswrapper[4731]: I1125 16:01:49.087752 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-48smz" podStartSLOduration=2.930921629 podStartE2EDuration="10.087736331s" podCreationTimestamp="2025-11-25 16:01:39 +0000 UTC" firstStartedPulling="2025-11-25 16:01:40.956993966 +0000 UTC m=+4059.704128396" lastFinishedPulling="2025-11-25 16:01:48.113808668 +0000 UTC m=+4066.860943098" observedRunningTime="2025-11-25 16:01:49.084832209 +0000 UTC m=+4067.831966639" watchObservedRunningTime="2025-11-25 16:01:49.087736331 +0000 UTC m=+4067.834870761" Nov 25 16:01:49 crc kubenswrapper[4731]: I1125 16:01:49.400018 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:49 crc kubenswrapper[4731]: I1125 16:01:49.400071 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:49 crc kubenswrapper[4731]: I1125 16:01:49.555452 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_29e42be3-60a3-434a-92b2-6692d4316991/memcached/0.log" Nov 25 16:01:50 crc kubenswrapper[4731]: I1125 16:01:50.458274 4731 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-48smz" podUID="5f376bca-97b3-4267-9139-366fc69fe99c" containerName="registry-server" probeResult="failure" output=< Nov 25 16:01:50 crc kubenswrapper[4731]: timeout: failed to connect service ":50051" within 1s Nov 25 16:01:50 crc kubenswrapper[4731]: > Nov 25 16:01:59 crc kubenswrapper[4731]: I1125 16:01:59.448761 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:59 crc 
kubenswrapper[4731]: I1125 16:01:59.509304 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:01:59 crc kubenswrapper[4731]: I1125 16:01:59.692163 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48smz"] Nov 25 16:02:01 crc kubenswrapper[4731]: I1125 16:02:01.180279 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-48smz" podUID="5f376bca-97b3-4267-9139-366fc69fe99c" containerName="registry-server" containerID="cri-o://07f7467ff310f76f22c646bdc7977a1784fa4d4167f0b946076ca06dee2a339f" gracePeriod=2 Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.200529 4731 generic.go:334] "Generic (PLEG): container finished" podID="5f376bca-97b3-4267-9139-366fc69fe99c" containerID="07f7467ff310f76f22c646bdc7977a1784fa4d4167f0b946076ca06dee2a339f" exitCode=0 Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.200888 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48smz" event={"ID":"5f376bca-97b3-4267-9139-366fc69fe99c","Type":"ContainerDied","Data":"07f7467ff310f76f22c646bdc7977a1784fa4d4167f0b946076ca06dee2a339f"} Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.323543 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.415509 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-catalog-content\") pod \"5f376bca-97b3-4267-9139-366fc69fe99c\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.415598 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfpx4\" (UniqueName: \"kubernetes.io/projected/5f376bca-97b3-4267-9139-366fc69fe99c-kube-api-access-mfpx4\") pod \"5f376bca-97b3-4267-9139-366fc69fe99c\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.415652 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-utilities\") pod \"5f376bca-97b3-4267-9139-366fc69fe99c\" (UID: \"5f376bca-97b3-4267-9139-366fc69fe99c\") " Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.416910 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-utilities" (OuterVolumeSpecName: "utilities") pod "5f376bca-97b3-4267-9139-366fc69fe99c" (UID: "5f376bca-97b3-4267-9139-366fc69fe99c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.421295 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f376bca-97b3-4267-9139-366fc69fe99c-kube-api-access-mfpx4" (OuterVolumeSpecName: "kube-api-access-mfpx4") pod "5f376bca-97b3-4267-9139-366fc69fe99c" (UID: "5f376bca-97b3-4267-9139-366fc69fe99c"). InnerVolumeSpecName "kube-api-access-mfpx4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.466729 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f376bca-97b3-4267-9139-366fc69fe99c" (UID: "5f376bca-97b3-4267-9139-366fc69fe99c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.518707 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.518749 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfpx4\" (UniqueName: \"kubernetes.io/projected/5f376bca-97b3-4267-9139-366fc69fe99c-kube-api-access-mfpx4\") on node \"crc\" DevicePath \"\"" Nov 25 16:02:02 crc kubenswrapper[4731]: I1125 16:02:02.518761 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f376bca-97b3-4267-9139-366fc69fe99c-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 16:02:03 crc kubenswrapper[4731]: I1125 16:02:03.212443 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48smz" event={"ID":"5f376bca-97b3-4267-9139-366fc69fe99c","Type":"ContainerDied","Data":"9bf1d484ab5b8f3657e1953dc15d425ff42dbc2d3c1a04e73fbe0bf42da46ce8"} Nov 25 16:02:03 crc kubenswrapper[4731]: I1125 16:02:03.212532 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48smz" Nov 25 16:02:03 crc kubenswrapper[4731]: I1125 16:02:03.212784 4731 scope.go:117] "RemoveContainer" containerID="07f7467ff310f76f22c646bdc7977a1784fa4d4167f0b946076ca06dee2a339f" Nov 25 16:02:03 crc kubenswrapper[4731]: I1125 16:02:03.236977 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48smz"] Nov 25 16:02:03 crc kubenswrapper[4731]: I1125 16:02:03.242077 4731 scope.go:117] "RemoveContainer" containerID="7c464636e21acc4d213e6699848c562357371543e3748753e676ea09ff2044e3" Nov 25 16:02:03 crc kubenswrapper[4731]: I1125 16:02:03.247354 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-48smz"] Nov 25 16:02:03 crc kubenswrapper[4731]: I1125 16:02:03.262317 4731 scope.go:117] "RemoveContainer" containerID="df9270dfcb07c0cb042b2bdb589eab3d78c0c70aa809befa7c13436bcaefa9ba" Nov 25 16:02:04 crc kubenswrapper[4731]: I1125 16:02:04.826524 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f376bca-97b3-4267-9139-366fc69fe99c" path="/var/lib/kubelet/pods/5f376bca-97b3-4267-9139-366fc69fe99c/volumes" Nov 25 16:02:07 crc kubenswrapper[4731]: I1125 16:02:07.143575 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/util/0.log" Nov 25 16:02:07 crc kubenswrapper[4731]: I1125 16:02:07.540918 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/util/0.log" Nov 25 16:02:07 crc kubenswrapper[4731]: I1125 16:02:07.542156 4731 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/pull/0.log" Nov 25 16:02:07 crc kubenswrapper[4731]: I1125 16:02:07.608059 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/pull/0.log" Nov 25 16:02:07 crc kubenswrapper[4731]: I1125 16:02:07.745796 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/util/0.log" Nov 25 16:02:07 crc kubenswrapper[4731]: I1125 16:02:07.779956 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/extract/0.log" Nov 25 16:02:07 crc kubenswrapper[4731]: I1125 16:02:07.807900 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/pull/0.log" Nov 25 16:02:07 crc kubenswrapper[4731]: I1125 16:02:07.920213 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-vldj4_a706f3bc-21c7-416e-b7cb-f0886804bc08/kube-rbac-proxy/0.log" Nov 25 16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.031335 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-vldj4_a706f3bc-21c7-416e-b7cb-f0886804bc08/manager/0.log" Nov 25 16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.060325 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-8w79f_92f78ba4-d5a9-43f8-bb6d-7d226c301ca4/kube-rbac-proxy/0.log" Nov 25 16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.178969 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-8w79f_92f78ba4-d5a9-43f8-bb6d-7d226c301ca4/manager/0.log" Nov 25 16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.270712 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hxwn7_f5180254-3122-4550-b259-f870f4575073/kube-rbac-proxy/0.log" Nov 25 16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.318838 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hxwn7_f5180254-3122-4550-b259-f870f4575073/manager/0.log" Nov 25 16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.437595 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-r52fx_8212bdfe-bb4a-4865-9ef9-ba58232188f7/kube-rbac-proxy/0.log" Nov 25 16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.578586 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-r52fx_8212bdfe-bb4a-4865-9ef9-ba58232188f7/manager/0.log" Nov 25 16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.637779 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-kxfnt_ab316c81-bc21-47b6-8dbd-bd6c26cc2769/kube-rbac-proxy/0.log" Nov 25 
16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.749458 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-kxfnt_ab316c81-bc21-47b6-8dbd-bd6c26cc2769/manager/0.log" Nov 25 16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.789702 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-kn45z_e4f0ff64-5199-4a19-8657-08ff58d915da/kube-rbac-proxy/0.log" Nov 25 16:02:08 crc kubenswrapper[4731]: I1125 16:02:08.848617 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-kn45z_e4f0ff64-5199-4a19-8657-08ff58d915da/manager/0.log" Nov 25 16:02:09 crc kubenswrapper[4731]: I1125 16:02:09.663008 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-2sf58_d195c5e8-4c33-44ad-ba19-deadfc6e21ef/kube-rbac-proxy/0.log" Nov 25 16:02:09 crc kubenswrapper[4731]: I1125 16:02:09.870866 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-2sf58_d195c5e8-4c33-44ad-ba19-deadfc6e21ef/manager/0.log" Nov 25 16:02:09 crc kubenswrapper[4731]: I1125 16:02:09.883307 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-sh9ng_fa757060-f39f-4482-8fcb-41ae41ef9cb9/kube-rbac-proxy/0.log" Nov 25 16:02:09 crc kubenswrapper[4731]: I1125 16:02:09.931161 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-sh9ng_fa757060-f39f-4482-8fcb-41ae41ef9cb9/manager/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.085267 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-h4whj_83e6ff4c-b0f0-4d61-8923-31c0d3f1398f/kube-rbac-proxy/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.162313 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-h4whj_83e6ff4c-b0f0-4d61-8923-31c0d3f1398f/manager/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.267791 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-2npjw_1af6726c-a2ca-43a1-b932-cddcd28457d1/kube-rbac-proxy/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.287440 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-2npjw_1af6726c-a2ca-43a1-b932-cddcd28457d1/manager/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.380296 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-7kltb_01518faf-8637-4852-b877-b5c8ca2c7d91/kube-rbac-proxy/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.437653 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-7kltb_01518faf-8637-4852-b877-b5c8ca2c7d91/manager/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.574032 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-j2wt5_f5ea628e-44a7-499c-a433-14d18a9bb804/kube-rbac-proxy/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.646637 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-j2wt5_f5ea628e-44a7-499c-a433-14d18a9bb804/manager/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.701794 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-qjrbz_39362f78-55ff-4380-80b3-c46e2d004749/kube-rbac-proxy/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.818372 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-qjrbz_39362f78-55ff-4380-80b3-c46e2d004749/manager/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.894300 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-bdwql_1eedb0ec-4f2e-43da-8e15-aae0436e942e/manager/0.log" Nov 25 16:02:10 crc kubenswrapper[4731]: I1125 16:02:10.928987 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-bdwql_1eedb0ec-4f2e-43da-8e15-aae0436e942e/kube-rbac-proxy/0.log" Nov 25 16:02:11 crc kubenswrapper[4731]: I1125 16:02:11.059227 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-kctp7_1bdef4b3-96db-42a5-aefd-2641cced3d1e/kube-rbac-proxy/0.log" Nov 25 16:02:11 crc kubenswrapper[4731]: I1125 16:02:11.068846 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-kctp7_1bdef4b3-96db-42a5-aefd-2641cced3d1e/manager/0.log" Nov 25 16:02:11 crc kubenswrapper[4731]: I1125 16:02:11.441985 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6k7ns_f573d017-983d-413d-adea-34617a278460/registry-server/0.log" Nov 25 16:02:11 crc kubenswrapper[4731]: I1125 16:02:11.509211 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-f546cbb6c-wz8k8_c2b163b6-05e5-4c6e-95f9-2748e85eb037/operator/0.log" Nov 25 16:02:11 crc kubenswrapper[4731]: I1125 16:02:11.594420 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-wmjzm_77bbfdb5-d5b4-4512-ac18-e3d787eea6f2/kube-rbac-proxy/0.log" Nov 25 16:02:11 crc kubenswrapper[4731]: I1125 16:02:11.698249 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-wmjzm_77bbfdb5-d5b4-4512-ac18-e3d787eea6f2/manager/0.log" Nov 25 16:02:11 crc kubenswrapper[4731]: I1125 16:02:11.762609 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-8m7t9_61221a67-95aa-4e96-a41c-4d9343305157/kube-rbac-proxy/0.log" Nov 25 16:02:11 crc kubenswrapper[4731]: I1125 16:02:11.857976 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-8m7t9_61221a67-95aa-4e96-a41c-4d9343305157/manager/0.log" Nov 25 16:02:11 crc kubenswrapper[4731]: I1125 16:02:11.975807 4731 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6tddn_2aeac7ab-78a9-49f2-9640-0c91b73bbe57/operator/0.log" Nov 25 16:02:12 crc kubenswrapper[4731]: I1125 16:02:12.128194 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-ft6fb_bc21926a-c3cf-49fa-addf-01d3156c1b5f/kube-rbac-proxy/0.log" Nov 25 16:02:12 crc kubenswrapper[4731]: I1125 16:02:12.258459 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-ft6fb_bc21926a-c3cf-49fa-addf-01d3156c1b5f/manager/0.log" Nov 25 16:02:12 crc kubenswrapper[4731]: I1125 16:02:12.319144 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_089e8703-f71e-4f81-91af-19224ba7b1b4/kube-rbac-proxy/0.log" Nov 25 16:02:12 crc kubenswrapper[4731]: I1125 16:02:12.490326 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-9w7kw_54f47a6e-c9be-49a3-89af-43ecbd6b7b93/kube-rbac-proxy/0.log" Nov 25 16:02:12 crc kubenswrapper[4731]: I1125 16:02:12.538661 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-9w7kw_54f47a6e-c9be-49a3-89af-43ecbd6b7b93/manager/0.log" Nov 25 16:02:12 crc kubenswrapper[4731]: I1125 16:02:12.682753 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-sk2wl_158dbcc7-a9af-464b-9886-4f5f6f4bbf71/kube-rbac-proxy/0.log" Nov 25 16:02:12 crc kubenswrapper[4731]: I1125 16:02:12.715113 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_089e8703-f71e-4f81-91af-19224ba7b1b4/manager/0.log" Nov 25 16:02:12 crc kubenswrapper[4731]: I1125 16:02:12.798794 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-sk2wl_158dbcc7-a9af-464b-9886-4f5f6f4bbf71/manager/0.log" Nov 25 16:02:12 crc kubenswrapper[4731]: I1125 16:02:12.870660 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-77d84b6b6b-mdwdb_7d6095a8-a478-472d-b84b-c99877ddcdf1/manager/0.log" Nov 25 16:02:30 crc kubenswrapper[4731]: I1125 16:02:30.942787 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vpcm5_740209e9-0d7f-4da0-b68f-2ef5628039df/kube-rbac-proxy/0.log" Nov 25 16:02:30 crc kubenswrapper[4731]: I1125 16:02:30.949521 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qpv96_013533b2-fe47-4b12-9ecf-98705944a1eb/control-plane-machine-set-operator/0.log" Nov 25 16:02:31 crc kubenswrapper[4731]: I1125 16:02:31.096567 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vpcm5_740209e9-0d7f-4da0-b68f-2ef5628039df/machine-api-operator/0.log" Nov 25 16:02:42 crc kubenswrapper[4731]: I1125 16:02:42.579768 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7bwk9_4d05d1b3-fbd6-4efc-83a7-b446881f41a1/cert-manager-controller/0.log" Nov 25 16:02:42 crc kubenswrapper[4731]: I1125 16:02:42.681472 4731 log.go:25] 
"Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-z8qcw_e6f08a04-49da-431f-8ad5-2669b0395a83/cert-manager-cainjector/0.log" Nov 25 16:02:42 crc kubenswrapper[4731]: I1125 16:02:42.766629 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-s2z26_b8915b48-72ed-4372-bfbf-7f933af258e0/cert-manager-webhook/0.log" Nov 25 16:02:50 crc kubenswrapper[4731]: I1125 16:02:50.040053 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:02:50 crc kubenswrapper[4731]: I1125 16:02:50.041022 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:02:55 crc kubenswrapper[4731]: I1125 16:02:55.158100 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-p7zbr_6f5cb68e-43ff-4a3a-9917-b6b8349ddb03/nmstate-console-plugin/0.log" Nov 25 16:02:55 crc kubenswrapper[4731]: I1125 16:02:55.364220 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-58n5h_39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9/nmstate-handler/0.log" Nov 25 16:02:55 crc kubenswrapper[4731]: I1125 16:02:55.427654 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-qjzpd_60a0d26a-381d-43f4-9cd1-e9172996b0e2/kube-rbac-proxy/0.log" Nov 25 16:02:55 crc kubenswrapper[4731]: I1125 16:02:55.454256 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-qjzpd_60a0d26a-381d-43f4-9cd1-e9172996b0e2/nmstate-metrics/0.log" Nov 25 16:02:55 crc kubenswrapper[4731]: I1125 16:02:55.614632 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-zbjzj_5a45a5cc-bdc6-4daa-b2fd-509237906c01/nmstate-operator/0.log" Nov 25 16:02:55 crc kubenswrapper[4731]: I1125 16:02:55.654853 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-rpjkc_c271b7db-3cd3-4045-a954-497f15d9f89d/nmstate-webhook/0.log" Nov 25 16:03:11 crc kubenswrapper[4731]: I1125 16:03:11.538007 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5kfg7_febb59e9-b008-465d-890a-6a6a342cf56b/kube-rbac-proxy/0.log" Nov 25 16:03:11 crc kubenswrapper[4731]: I1125 16:03:11.625230 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5kfg7_febb59e9-b008-465d-890a-6a6a342cf56b/controller/0.log" Nov 25 16:03:11 crc kubenswrapper[4731]: I1125 16:03:11.735106 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-frr-files/0.log" Nov 25 16:03:12 crc kubenswrapper[4731]: I1125 16:03:12.529631 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-reloader/0.log" Nov 25 16:03:12 crc kubenswrapper[4731]: I1125 16:03:12.538132 4731 log.go:25] 
"Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-metrics/0.log" Nov 25 16:03:12 crc kubenswrapper[4731]: I1125 16:03:12.603432 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-frr-files/0.log" Nov 25 16:03:12 crc kubenswrapper[4731]: I1125 16:03:12.627973 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-reloader/0.log" Nov 25 16:03:12 crc kubenswrapper[4731]: I1125 16:03:12.773167 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-frr-files/0.log" Nov 25 16:03:12 crc kubenswrapper[4731]: I1125 16:03:12.779763 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-reloader/0.log" Nov 25 16:03:12 crc kubenswrapper[4731]: I1125 16:03:12.809796 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-metrics/0.log" Nov 25 16:03:12 crc kubenswrapper[4731]: I1125 16:03:12.840485 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-metrics/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.038585 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-frr-files/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.061292 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/controller/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.090174 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-metrics/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.109509 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-reloader/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.242035 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/frr-metrics/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.372749 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/kube-rbac-proxy/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.414106 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/kube-rbac-proxy-frr/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.543233 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/reloader/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.661059 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-qxfjt_fe90ff60-e166-4e73-a995-dfd82a45895c/frr-k8s-webhook-server/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.866641 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6bc94d877-jvccm_b6957d61-fdfc-49b1-8721-a14991fc51c5/manager/0.log" Nov 25 16:03:13 crc kubenswrapper[4731]: I1125 16:03:13.984635 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-796ff6997-qwkk4_af60cb8b-34de-4af2-a647-84ed628f77f0/webhook-server/0.log" Nov 25 16:03:14 crc kubenswrapper[4731]: I1125 16:03:14.125764 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2jnfr_5df2adbb-3909-4e10-8f11-5953fb961766/kube-rbac-proxy/0.log" Nov 25 16:03:15 crc kubenswrapper[4731]: I1125 16:03:15.016944 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/frr/0.log" Nov 25 16:03:15 crc kubenswrapper[4731]: I1125 16:03:15.022437 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2jnfr_5df2adbb-3909-4e10-8f11-5953fb961766/speaker/0.log" Nov 25 16:03:20 crc kubenswrapper[4731]: I1125 16:03:20.039550 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:03:20 crc kubenswrapper[4731]: I1125 16:03:20.040180 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:03:27 crc kubenswrapper[4731]: I1125 16:03:27.622925 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/util/0.log" Nov 25 16:03:27 crc kubenswrapper[4731]: I1125 16:03:27.755209 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/util/0.log" Nov 25 16:03:27 crc kubenswrapper[4731]: I1125 16:03:27.796462 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/pull/0.log" Nov 25 16:03:27 crc kubenswrapper[4731]: I1125 16:03:27.819351 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/pull/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.010211 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/util/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.013187 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/pull/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.015969 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/extract/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.181100 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/util/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.330835 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/pull/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.331098 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/util/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.376171 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/pull/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.599500 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/util/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.613647 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/pull/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.637025 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/extract/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.814986 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-utilities/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.957361 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-content/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.978051 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-content/0.log" Nov 25 16:03:28 crc kubenswrapper[4731]: I1125 16:03:28.978811 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-utilities/0.log" Nov 25 16:03:29 crc kubenswrapper[4731]: I1125 16:03:29.148485 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-utilities/0.log" Nov 25 16:03:29 crc kubenswrapper[4731]: I1125 16:03:29.187087 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-content/0.log" Nov 25 16:03:29 crc kubenswrapper[4731]: I1125 16:03:29.421919 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-utilities/0.log" Nov 25 16:03:29 crc kubenswrapper[4731]: I1125 16:03:29.524331 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-content/0.log" Nov 25 16:03:29 crc kubenswrapper[4731]: I1125 16:03:29.549957 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-utilities/0.log" Nov 25 16:03:29 crc kubenswrapper[4731]: I1125 16:03:29.652021 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-content/0.log" Nov 25 16:03:29 crc kubenswrapper[4731]: I1125 16:03:29.816291 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-utilities/0.log" Nov 25 16:03:29 crc kubenswrapper[4731]: I1125 16:03:29.861608 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-content/0.log" Nov 25 16:03:30 crc kubenswrapper[4731]: I1125 16:03:30.103856 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/util/0.log" Nov 25 16:03:30 crc kubenswrapper[4731]: I1125 16:03:30.286873 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/util/0.log" Nov 25 16:03:30 crc kubenswrapper[4731]: I1125 16:03:30.466981 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/pull/0.log" Nov 25 16:03:30 crc kubenswrapper[4731]: I1125 16:03:30.963070 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/pull/0.log" Nov 25 16:03:30 crc kubenswrapper[4731]: I1125 16:03:30.993205 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/registry-server/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.080654 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/registry-server/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.220529 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/util/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.221597 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/pull/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.290784 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/extract/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.407661 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vk7l9_2807ad74-1d0f-435b-bfb1-3b5eb4dca20d/marketplace-operator/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.453137 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-utilities/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.654366 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-utilities/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.661743 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-content/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.664543 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-content/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.838928 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-utilities/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.892458 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-utilities/0.log" Nov 25 16:03:31 crc kubenswrapper[4731]: I1125 16:03:31.930390 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-content/0.log" Nov 25 16:03:32 crc kubenswrapper[4731]: I1125 16:03:32.037550 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/registry-server/0.log" Nov 25 16:03:32 crc kubenswrapper[4731]: I1125 16:03:32.090750 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-content/0.log" Nov 25 16:03:32 crc kubenswrapper[4731]: I1125 16:03:32.129530 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-content/0.log" Nov 25 16:03:32 crc kubenswrapper[4731]: I1125 16:03:32.170686 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-utilities/0.log" Nov 25 16:03:32 crc kubenswrapper[4731]: I1125 16:03:32.910303 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-utilities/0.log" Nov 25 16:03:33 crc kubenswrapper[4731]: I1125 16:03:33.030849 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-content/0.log" Nov 25 16:03:33 crc kubenswrapper[4731]: I1125 16:03:33.640494 4731 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/registry-server/0.log" Nov 25 16:03:45 crc kubenswrapper[4731]: I1125 16:03:45.050611 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-2vtlt_cbf9673f-5c6b-41d7-8eba-cdc0414a3910/prometheus-operator/0.log" Nov 25 16:03:45 crc kubenswrapper[4731]: I1125 16:03:45.054209 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-69fb756767-6x8xb_236a458e-0795-42ad-8dab-84407618f96d/prometheus-operator-admission-webhook/0.log" Nov 25 16:03:45 crc kubenswrapper[4731]: I1125 16:03:45.275150 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-69fb756767-wdpx7_900460b9-59fd-4456-925b-ed34b683ac09/prometheus-operator-admission-webhook/0.log" Nov 25 16:03:45 crc kubenswrapper[4731]: I1125 16:03:45.331270 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-9s4pj_60ec713e-70e1-4fbf-b011-b6e4bcf87539/operator/0.log" Nov 25 16:03:45 crc kubenswrapper[4731]: I1125 16:03:45.470214 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-qz72z_bfcf5cee-d085-4fb4-8144-9a38143246b4/perses-operator/0.log" Nov 25 16:03:50 crc kubenswrapper[4731]: I1125 16:03:50.039923 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:03:50 crc kubenswrapper[4731]: I1125 16:03:50.040631 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:03:50 crc kubenswrapper[4731]: I1125 16:03:50.040693 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 16:03:50 crc kubenswrapper[4731]: I1125 16:03:50.041600 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"483fad0053bc2b3bd7b0ec82c454f39b2e2e33b64ce6472c9ab41d7004caede0"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 16:03:50 crc kubenswrapper[4731]: I1125 16:03:50.041654 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://483fad0053bc2b3bd7b0ec82c454f39b2e2e33b64ce6472c9ab41d7004caede0" gracePeriod=600 Nov 25 16:03:50 crc kubenswrapper[4731]: I1125 16:03:50.248371 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="483fad0053bc2b3bd7b0ec82c454f39b2e2e33b64ce6472c9ab41d7004caede0" exitCode=0 Nov 25 16:03:50 crc kubenswrapper[4731]: I1125 16:03:50.248728 4731 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"483fad0053bc2b3bd7b0ec82c454f39b2e2e33b64ce6472c9ab41d7004caede0"} Nov 25 16:03:50 crc kubenswrapper[4731]: I1125 16:03:50.248762 4731 scope.go:117] "RemoveContainer" containerID="ea8b09ff73091cf74b23bd0e396b661acbfa811bb637453452b571d711473d00" Nov 25 16:03:51 crc kubenswrapper[4731]: I1125 16:03:51.261532 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"} Nov 25 16:05:32 crc kubenswrapper[4731]: I1125 16:05:32.325551 4731 generic.go:334] "Generic (PLEG): container finished" podID="53260cd4-5c2d-4f41-a325-3d1fff95df83" containerID="73b0533e32d42e77a5647e0e0a7c5735221bb997b08abb9f9b7ccef4e1c6e67e" exitCode=0 Nov 25 16:05:32 crc kubenswrapper[4731]: I1125 16:05:32.325613 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-4dfrk/must-gather-98zk6" event={"ID":"53260cd4-5c2d-4f41-a325-3d1fff95df83","Type":"ContainerDied","Data":"73b0533e32d42e77a5647e0e0a7c5735221bb997b08abb9f9b7ccef4e1c6e67e"} Nov 25 16:05:32 crc kubenswrapper[4731]: I1125 16:05:32.326774 4731 scope.go:117] "RemoveContainer" containerID="73b0533e32d42e77a5647e0e0a7c5735221bb997b08abb9f9b7ccef4e1c6e67e" Nov 25 16:05:32 crc kubenswrapper[4731]: I1125 16:05:32.490027 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4dfrk_must-gather-98zk6_53260cd4-5c2d-4f41-a325-3d1fff95df83/gather/0.log" Nov 25 16:05:40 crc kubenswrapper[4731]: I1125 16:05:40.830868 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-4dfrk/must-gather-98zk6"] Nov 25 16:05:40 crc kubenswrapper[4731]: I1125 16:05:40.831552 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-4dfrk/must-gather-98zk6" podUID="53260cd4-5c2d-4f41-a325-3d1fff95df83" containerName="copy" containerID="cri-o://c4e81f8092148cef51eaaca27a24db0bf2fb7a9c32b7150eb1c57a79387b5bd4" gracePeriod=2 Nov 25 16:05:40 crc kubenswrapper[4731]: I1125 16:05:40.842238 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-4dfrk/must-gather-98zk6"] Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.421146 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4dfrk_must-gather-98zk6_53260cd4-5c2d-4f41-a325-3d1fff95df83/copy/0.log" Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.421691 4731 generic.go:334] "Generic (PLEG): container finished" podID="53260cd4-5c2d-4f41-a325-3d1fff95df83" containerID="c4e81f8092148cef51eaaca27a24db0bf2fb7a9c32b7150eb1c57a79387b5bd4" exitCode=143 Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.421732 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03b4a4580344be0abc0de5db3d140c70af8e1dc716991657fd6a243572dba5a6" Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.479518 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-4dfrk_must-gather-98zk6_53260cd4-5c2d-4f41-a325-3d1fff95df83/copy/0.log" Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.479895 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4dfrk/must-gather-98zk6" Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.640379 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5hzt\" (UniqueName: \"kubernetes.io/projected/53260cd4-5c2d-4f41-a325-3d1fff95df83-kube-api-access-w5hzt\") pod \"53260cd4-5c2d-4f41-a325-3d1fff95df83\" (UID: \"53260cd4-5c2d-4f41-a325-3d1fff95df83\") " Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.640789 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/53260cd4-5c2d-4f41-a325-3d1fff95df83-must-gather-output\") pod \"53260cd4-5c2d-4f41-a325-3d1fff95df83\" (UID: \"53260cd4-5c2d-4f41-a325-3d1fff95df83\") " Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.656010 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53260cd4-5c2d-4f41-a325-3d1fff95df83-kube-api-access-w5hzt" (OuterVolumeSpecName: "kube-api-access-w5hzt") pod "53260cd4-5c2d-4f41-a325-3d1fff95df83" (UID: "53260cd4-5c2d-4f41-a325-3d1fff95df83"). InnerVolumeSpecName "kube-api-access-w5hzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.743974 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5hzt\" (UniqueName: \"kubernetes.io/projected/53260cd4-5c2d-4f41-a325-3d1fff95df83-kube-api-access-w5hzt\") on node \"crc\" DevicePath \"\"" Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.820940 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53260cd4-5c2d-4f41-a325-3d1fff95df83-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "53260cd4-5c2d-4f41-a325-3d1fff95df83" (UID: "53260cd4-5c2d-4f41-a325-3d1fff95df83"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:05:41 crc kubenswrapper[4731]: I1125 16:05:41.846561 4731 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/53260cd4-5c2d-4f41-a325-3d1fff95df83-must-gather-output\") on node \"crc\" DevicePath \"\"" Nov 25 16:05:42 crc kubenswrapper[4731]: I1125 16:05:42.430083 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-4dfrk/must-gather-98zk6" Nov 25 16:05:42 crc kubenswrapper[4731]: I1125 16:05:42.825620 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53260cd4-5c2d-4f41-a325-3d1fff95df83" path="/var/lib/kubelet/pods/53260cd4-5c2d-4f41-a325-3d1fff95df83/volumes" Nov 25 16:05:50 crc kubenswrapper[4731]: I1125 16:05:50.040577 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:05:50 crc kubenswrapper[4731]: I1125 16:05:50.042098 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:06:17 crc kubenswrapper[4731]: I1125 16:06:17.507378 4731 scope.go:117] "RemoveContainer" containerID="f4b01ce6effcff0ed6a2674bac92c7b53ade12b59118951ae844e56ffc63d095" Nov 25 16:06:17 crc kubenswrapper[4731]: I1125 16:06:17.540627 4731 scope.go:117] "RemoveContainer" containerID="73b0533e32d42e77a5647e0e0a7c5735221bb997b08abb9f9b7ccef4e1c6e67e" Nov 25 16:06:17 crc kubenswrapper[4731]: I1125 16:06:17.651295 4731 scope.go:117] "RemoveContainer" containerID="c4e81f8092148cef51eaaca27a24db0bf2fb7a9c32b7150eb1c57a79387b5bd4" Nov 25 16:06:20 crc kubenswrapper[4731]: I1125 16:06:20.039747 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:06:20 crc kubenswrapper[4731]: I1125 16:06:20.040383 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.040045 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.040493 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.040545 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.041314 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.041372 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" gracePeriod=600 Nov 25 16:06:50 crc kubenswrapper[4731]: E1125 16:06:50.235956 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.981653 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6v8nf"] Nov 25 16:06:50 crc kubenswrapper[4731]: E1125 16:06:50.987447 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53260cd4-5c2d-4f41-a325-3d1fff95df83" containerName="gather" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.987686 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="53260cd4-5c2d-4f41-a325-3d1fff95df83" containerName="gather" Nov 25 16:06:50 crc kubenswrapper[4731]: E1125 16:06:50.987835 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f376bca-97b3-4267-9139-366fc69fe99c" containerName="extract-utilities" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.987893 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f376bca-97b3-4267-9139-366fc69fe99c" containerName="extract-utilities" Nov 25 16:06:50 crc kubenswrapper[4731]: E1125 16:06:50.987968 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f376bca-97b3-4267-9139-366fc69fe99c" containerName="registry-server" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.988044 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f376bca-97b3-4267-9139-366fc69fe99c" containerName="registry-server" Nov 25 16:06:50 crc kubenswrapper[4731]: E1125 16:06:50.988144 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53260cd4-5c2d-4f41-a325-3d1fff95df83" containerName="copy" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.988219 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="53260cd4-5c2d-4f41-a325-3d1fff95df83" containerName="copy" Nov 25 16:06:50 crc kubenswrapper[4731]: E1125 16:06:50.988385 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f376bca-97b3-4267-9139-366fc69fe99c" containerName="extract-content" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.988481 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f376bca-97b3-4267-9139-366fc69fe99c" containerName="extract-content" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.989360 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="53260cd4-5c2d-4f41-a325-3d1fff95df83" containerName="copy" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.989487 4731 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="53260cd4-5c2d-4f41-a325-3d1fff95df83" containerName="gather" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.989602 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f376bca-97b3-4267-9139-366fc69fe99c" containerName="registry-server" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.993972 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:50 crc kubenswrapper[4731]: I1125 16:06:50.997879 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v8nf"] Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.040119 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-catalog-content\") pod \"redhat-marketplace-6v8nf\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.040352 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-utilities\") pod \"redhat-marketplace-6v8nf\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.040566 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxlb7\" (UniqueName: \"kubernetes.io/projected/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-kube-api-access-zxlb7\") pod \"redhat-marketplace-6v8nf\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.103485 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" exitCode=0 Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.103534 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"} Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.103589 4731 scope.go:117] "RemoveContainer" containerID="483fad0053bc2b3bd7b0ec82c454f39b2e2e33b64ce6472c9ab41d7004caede0" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.104434 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" Nov 25 16:06:51 crc kubenswrapper[4731]: E1125 16:06:51.104926 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.142325 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxlb7\" (UniqueName: 
\"kubernetes.io/projected/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-kube-api-access-zxlb7\") pod \"redhat-marketplace-6v8nf\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.142417 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-catalog-content\") pod \"redhat-marketplace-6v8nf\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.142525 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-utilities\") pod \"redhat-marketplace-6v8nf\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.143274 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-utilities\") pod \"redhat-marketplace-6v8nf\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.143478 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-catalog-content\") pod \"redhat-marketplace-6v8nf\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.183255 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxlb7\" (UniqueName: \"kubernetes.io/projected/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-kube-api-access-zxlb7\") pod \"redhat-marketplace-6v8nf\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.349399 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:06:51 crc kubenswrapper[4731]: I1125 16:06:51.833240 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v8nf"] Nov 25 16:06:52 crc kubenswrapper[4731]: I1125 16:06:52.117245 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v8nf" event={"ID":"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8","Type":"ContainerStarted","Data":"29ea2cf10d919d6a4fe37b474270ec1fb0414a45b850b4980542041d1c2aee46"} Nov 25 16:06:53 crc kubenswrapper[4731]: I1125 16:06:53.137491 4731 generic.go:334] "Generic (PLEG): container finished" podID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerID="72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f" exitCode=0 Nov 25 16:06:53 crc kubenswrapper[4731]: I1125 16:06:53.137584 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v8nf" event={"ID":"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8","Type":"ContainerDied","Data":"72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f"} Nov 25 16:06:53 crc kubenswrapper[4731]: I1125 16:06:53.141868 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 16:06:55 crc kubenswrapper[4731]: I1125 16:06:55.158117 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v8nf" event={"ID":"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8","Type":"ContainerStarted","Data":"a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946"} Nov 25 16:06:57 crc kubenswrapper[4731]: I1125 16:06:57.180039 4731 generic.go:334] "Generic (PLEG): container finished" podID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerID="a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946" exitCode=0 Nov 25 16:06:57 crc kubenswrapper[4731]: I1125 16:06:57.180101 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v8nf" event={"ID":"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8","Type":"ContainerDied","Data":"a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946"} Nov 25 16:06:59 crc kubenswrapper[4731]: I1125 16:06:59.208858 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v8nf" event={"ID":"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8","Type":"ContainerStarted","Data":"6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866"} Nov 25 16:06:59 crc kubenswrapper[4731]: I1125 16:06:59.237893 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6v8nf" podStartSLOduration=4.206193365 podStartE2EDuration="9.237872525s" podCreationTimestamp="2025-11-25 16:06:50 +0000 UTC" firstStartedPulling="2025-11-25 16:06:53.14149612 +0000 UTC m=+4371.888630550" lastFinishedPulling="2025-11-25 16:06:58.17317526 +0000 UTC m=+4376.920309710" observedRunningTime="2025-11-25 16:06:59.233419313 +0000 UTC m=+4377.980553743" watchObservedRunningTime="2025-11-25 16:06:59.237872525 +0000 UTC m=+4377.985006955" Nov 25 16:07:01 crc kubenswrapper[4731]: I1125 16:07:01.350044 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:07:01 crc kubenswrapper[4731]: I1125 16:07:01.350412 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 
16:07:01 crc kubenswrapper[4731]: I1125 16:07:01.400164 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:07:04 crc kubenswrapper[4731]: I1125 16:07:04.812884 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" Nov 25 16:07:04 crc kubenswrapper[4731]: E1125 16:07:04.813796 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 16:07:11 crc kubenswrapper[4731]: I1125 16:07:11.398417 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:07:11 crc kubenswrapper[4731]: I1125 16:07:11.458498 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v8nf"] Nov 25 16:07:12 crc kubenswrapper[4731]: I1125 16:07:12.334031 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6v8nf" podUID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerName="registry-server" containerID="cri-o://6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866" gracePeriod=2 Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.315607 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.352197 4731 generic.go:334] "Generic (PLEG): container finished" podID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerID="6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866" exitCode=0 Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.352263 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v8nf" event={"ID":"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8","Type":"ContainerDied","Data":"6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866"} Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.352286 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6v8nf" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.352312 4731 scope.go:117] "RemoveContainer" containerID="6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.352298 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6v8nf" event={"ID":"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8","Type":"ContainerDied","Data":"29ea2cf10d919d6a4fe37b474270ec1fb0414a45b850b4980542041d1c2aee46"} Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.377948 4731 scope.go:117] "RemoveContainer" containerID="a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.403768 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-utilities\") pod \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.404009 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-catalog-content\") pod \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.404088 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxlb7\" (UniqueName: \"kubernetes.io/projected/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-kube-api-access-zxlb7\") pod \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\" (UID: \"0b9bb165-f94f-4aff-bb6d-cdfee35e55e8\") " Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.404950 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-utilities" (OuterVolumeSpecName: "utilities") pod "0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" (UID: "0b9bb165-f94f-4aff-bb6d-cdfee35e55e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.414337 4731 scope.go:117] "RemoveContainer" containerID="72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.418175 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-kube-api-access-zxlb7" (OuterVolumeSpecName: "kube-api-access-zxlb7") pod "0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" (UID: "0b9bb165-f94f-4aff-bb6d-cdfee35e55e8"). InnerVolumeSpecName "kube-api-access-zxlb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.432299 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" (UID: "0b9bb165-f94f-4aff-bb6d-cdfee35e55e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.497925 4731 scope.go:117] "RemoveContainer" containerID="6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866" Nov 25 16:07:13 crc kubenswrapper[4731]: E1125 16:07:13.499192 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866\": container with ID starting with 6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866 not found: ID does not exist" containerID="6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.499248 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866"} err="failed to get container status \"6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866\": rpc error: code = NotFound desc = could not find container \"6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866\": container with ID starting with 6b6122226cb3b625ddc370fd7573b15486d4dcf614cf29c2bceedc68119b9866 not found: ID does not exist" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.499283 4731 scope.go:117] "RemoveContainer" containerID="a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946" Nov 25 16:07:13 crc kubenswrapper[4731]: E1125 16:07:13.499832 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946\": container with ID starting with a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946 not found: ID does not exist" containerID="a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.499875 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946"} err="failed to get container status \"a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946\": rpc error: code = NotFound desc = could not find container \"a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946\": container with ID starting with a62e8c0fe9efc15096df5f28f25e75c23b883c8eac2f2f172d110676303b9946 not found: ID does not exist" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.499889 4731 scope.go:117] "RemoveContainer" containerID="72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f" Nov 25 16:07:13 crc kubenswrapper[4731]: E1125 16:07:13.500270 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f\": container with ID starting with 72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f not found: ID does not exist" containerID="72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.500298 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f"} err="failed to get container status \"72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f\": rpc error: code = NotFound desc = could not 
find container \"72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f\": container with ID starting with 72c469dcf9e060b6b69142edf2315de43be8c662bc26b9382aeb21efe4acfd5f not found: ID does not exist" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.507168 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.507203 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.507217 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxlb7\" (UniqueName: \"kubernetes.io/projected/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8-kube-api-access-zxlb7\") on node \"crc\" DevicePath \"\"" Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.695052 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v8nf"] Nov 25 16:07:13 crc kubenswrapper[4731]: I1125 16:07:13.704900 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6v8nf"] Nov 25 16:07:14 crc kubenswrapper[4731]: I1125 16:07:14.823910 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" path="/var/lib/kubelet/pods/0b9bb165-f94f-4aff-bb6d-cdfee35e55e8/volumes" Nov 25 16:07:17 crc kubenswrapper[4731]: I1125 16:07:17.722762 4731 scope.go:117] "RemoveContainer" containerID="fd5c8f7dcb85977d44110059f3b3cceecf780ffd11f19b78bcb739246d8f2f6b" Nov 25 16:07:17 crc kubenswrapper[4731]: I1125 16:07:17.742933 4731 scope.go:117] "RemoveContainer" containerID="0a37e6b4cab382950264a82dc4eb8339d660d5af68b988d9acbf336dd86d19fd" Nov 25 16:07:17 crc kubenswrapper[4731]: I1125 16:07:17.772064 4731 scope.go:117] "RemoveContainer" containerID="8c1cc3e247ed2f84e3f0689ccc4e4bab377e4f32f82623e359fc791bd886ff7e" Nov 25 16:07:19 crc kubenswrapper[4731]: I1125 16:07:19.812097 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" Nov 25 16:07:19 crc kubenswrapper[4731]: E1125 16:07:19.812643 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 16:07:31 crc kubenswrapper[4731]: I1125 16:07:31.812896 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" Nov 25 16:07:31 crc kubenswrapper[4731]: E1125 16:07:31.813801 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 16:07:46 crc kubenswrapper[4731]: I1125 
16:07:46.812506 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:07:46 crc kubenswrapper[4731]: E1125 16:07:46.813313 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:07:58 crc kubenswrapper[4731]: I1125 16:07:58.814335 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:07:58 crc kubenswrapper[4731]: E1125 16:07:58.815189 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:08:12 crc kubenswrapper[4731]: I1125 16:08:12.822175 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:08:12 crc kubenswrapper[4731]: E1125 16:08:12.825789 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:08:25 crc kubenswrapper[4731]: I1125 16:08:25.812507 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:08:25 crc kubenswrapper[4731]: E1125 16:08:25.813277 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.519605 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-56bpd/must-gather-t54tm"]
Nov 25 16:08:27 crc kubenswrapper[4731]: E1125 16:08:27.521520 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerName="extract-utilities"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.521632 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerName="extract-utilities"
Nov 25 16:08:27 crc kubenswrapper[4731]: E1125 16:08:27.521778 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerName="registry-server"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.521891 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerName="registry-server"
Nov 25 16:08:27 crc kubenswrapper[4731]: E1125 16:08:27.521994 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerName="extract-content"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.522110 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerName="extract-content"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.522452 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b9bb165-f94f-4aff-bb6d-cdfee35e55e8" containerName="registry-server"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.525161 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/must-gather-t54tm"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.529168 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-56bpd"/"kube-root-ca.crt"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.530015 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-56bpd"/"openshift-service-ca.crt"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.530271 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-56bpd"/"default-dockercfg-j5t48"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.556126 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-56bpd/must-gather-t54tm"]
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.612046 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74lm\" (UniqueName: \"kubernetes.io/projected/9c0853b3-8070-4067-8913-b8064b427cf5-kube-api-access-t74lm\") pod \"must-gather-t54tm\" (UID: \"9c0853b3-8070-4067-8913-b8064b427cf5\") " pod="openshift-must-gather-56bpd/must-gather-t54tm"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.612156 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c0853b3-8070-4067-8913-b8064b427cf5-must-gather-output\") pod \"must-gather-t54tm\" (UID: \"9c0853b3-8070-4067-8913-b8064b427cf5\") " pod="openshift-must-gather-56bpd/must-gather-t54tm"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.714684 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74lm\" (UniqueName: \"kubernetes.io/projected/9c0853b3-8070-4067-8913-b8064b427cf5-kube-api-access-t74lm\") pod \"must-gather-t54tm\" (UID: \"9c0853b3-8070-4067-8913-b8064b427cf5\") " pod="openshift-must-gather-56bpd/must-gather-t54tm"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.714771 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c0853b3-8070-4067-8913-b8064b427cf5-must-gather-output\") pod \"must-gather-t54tm\" (UID: \"9c0853b3-8070-4067-8913-b8064b427cf5\") " pod="openshift-must-gather-56bpd/must-gather-t54tm"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.715395 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c0853b3-8070-4067-8913-b8064b427cf5-must-gather-output\") pod \"must-gather-t54tm\" (UID: \"9c0853b3-8070-4067-8913-b8064b427cf5\") " pod="openshift-must-gather-56bpd/must-gather-t54tm"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.754562 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74lm\" (UniqueName: \"kubernetes.io/projected/9c0853b3-8070-4067-8913-b8064b427cf5-kube-api-access-t74lm\") pod \"must-gather-t54tm\" (UID: \"9c0853b3-8070-4067-8913-b8064b427cf5\") " pod="openshift-must-gather-56bpd/must-gather-t54tm"
Nov 25 16:08:27 crc kubenswrapper[4731]: I1125 16:08:27.860536 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/must-gather-t54tm"
Nov 25 16:08:28 crc kubenswrapper[4731]: I1125 16:08:28.426639 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-56bpd/must-gather-t54tm"]
Nov 25 16:08:29 crc kubenswrapper[4731]: I1125 16:08:29.169900 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-56bpd/must-gather-t54tm" event={"ID":"9c0853b3-8070-4067-8913-b8064b427cf5","Type":"ContainerStarted","Data":"e62900a4ff2c5fba699a38e0917e12c5a48025f42d01ab778ee571c179567ad3"}
Nov 25 16:08:30 crc kubenswrapper[4731]: I1125 16:08:30.187604 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-56bpd/must-gather-t54tm" event={"ID":"9c0853b3-8070-4067-8913-b8064b427cf5","Type":"ContainerStarted","Data":"4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163"}
Nov 25 16:08:31 crc kubenswrapper[4731]: I1125 16:08:31.203920 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-56bpd/must-gather-t54tm" event={"ID":"9c0853b3-8070-4067-8913-b8064b427cf5","Type":"ContainerStarted","Data":"844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a"}
Nov 25 16:08:31 crc kubenswrapper[4731]: I1125 16:08:31.222807 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-56bpd/must-gather-t54tm" podStartSLOduration=4.222782971 podStartE2EDuration="4.222782971s" podCreationTimestamp="2025-11-25 16:08:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 16:08:31.217202832 +0000 UTC m=+4469.964337272" watchObservedRunningTime="2025-11-25 16:08:31.222782971 +0000 UTC m=+4469.969917401"
Nov 25 16:08:33 crc kubenswrapper[4731]: I1125 16:08:33.696149 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-56bpd/crc-debug-mmbvk"]
Nov 25 16:08:33 crc kubenswrapper[4731]: I1125 16:08:33.698477 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/crc-debug-mmbvk"
Nov 25 16:08:33 crc kubenswrapper[4731]: I1125 16:08:33.867677 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98vsf\" (UniqueName: \"kubernetes.io/projected/a19ad4be-2deb-46d8-a83c-bede9ab7c510-kube-api-access-98vsf\") pod \"crc-debug-mmbvk\" (UID: \"a19ad4be-2deb-46d8-a83c-bede9ab7c510\") " pod="openshift-must-gather-56bpd/crc-debug-mmbvk"
Nov 25 16:08:33 crc kubenswrapper[4731]: I1125 16:08:33.867908 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a19ad4be-2deb-46d8-a83c-bede9ab7c510-host\") pod \"crc-debug-mmbvk\" (UID: \"a19ad4be-2deb-46d8-a83c-bede9ab7c510\") " pod="openshift-must-gather-56bpd/crc-debug-mmbvk"
Nov 25 16:08:33 crc kubenswrapper[4731]: I1125 16:08:33.970063 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a19ad4be-2deb-46d8-a83c-bede9ab7c510-host\") pod \"crc-debug-mmbvk\" (UID: \"a19ad4be-2deb-46d8-a83c-bede9ab7c510\") " pod="openshift-must-gather-56bpd/crc-debug-mmbvk"
Nov 25 16:08:33 crc kubenswrapper[4731]: I1125 16:08:33.970260 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98vsf\" (UniqueName: \"kubernetes.io/projected/a19ad4be-2deb-46d8-a83c-bede9ab7c510-kube-api-access-98vsf\") pod \"crc-debug-mmbvk\" (UID: \"a19ad4be-2deb-46d8-a83c-bede9ab7c510\") " pod="openshift-must-gather-56bpd/crc-debug-mmbvk"
Nov 25 16:08:33 crc kubenswrapper[4731]: I1125 16:08:33.970268 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a19ad4be-2deb-46d8-a83c-bede9ab7c510-host\") pod \"crc-debug-mmbvk\" (UID: \"a19ad4be-2deb-46d8-a83c-bede9ab7c510\") " pod="openshift-must-gather-56bpd/crc-debug-mmbvk"
Nov 25 16:08:33 crc kubenswrapper[4731]: I1125 16:08:33.994718 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98vsf\" (UniqueName: \"kubernetes.io/projected/a19ad4be-2deb-46d8-a83c-bede9ab7c510-kube-api-access-98vsf\") pod \"crc-debug-mmbvk\" (UID: \"a19ad4be-2deb-46d8-a83c-bede9ab7c510\") " pod="openshift-must-gather-56bpd/crc-debug-mmbvk"
Nov 25 16:08:34 crc kubenswrapper[4731]: I1125 16:08:34.028047 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/crc-debug-mmbvk"
Nov 25 16:08:34 crc kubenswrapper[4731]: W1125 16:08:34.067087 4731 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda19ad4be_2deb_46d8_a83c_bede9ab7c510.slice/crio-c03d8ad13a330fc37ebdb6272458a024283b6360930594b635f54765a1036a5e WatchSource:0}: Error finding container c03d8ad13a330fc37ebdb6272458a024283b6360930594b635f54765a1036a5e: Status 404 returned error can't find the container with id c03d8ad13a330fc37ebdb6272458a024283b6360930594b635f54765a1036a5e
Nov 25 16:08:34 crc kubenswrapper[4731]: I1125 16:08:34.245049 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-56bpd/crc-debug-mmbvk" event={"ID":"a19ad4be-2deb-46d8-a83c-bede9ab7c510","Type":"ContainerStarted","Data":"c03d8ad13a330fc37ebdb6272458a024283b6360930594b635f54765a1036a5e"}
Nov 25 16:08:35 crc kubenswrapper[4731]: I1125 16:08:35.256511 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-56bpd/crc-debug-mmbvk" event={"ID":"a19ad4be-2deb-46d8-a83c-bede9ab7c510","Type":"ContainerStarted","Data":"44943f513f17e84f388f4d958ea5674a57704787285db3c478582df9e1619df8"}
Nov 25 16:08:35 crc kubenswrapper[4731]: I1125 16:08:35.291901 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-56bpd/crc-debug-mmbvk" podStartSLOduration=2.291881182 podStartE2EDuration="2.291881182s" podCreationTimestamp="2025-11-25 16:08:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 16:08:35.276461607 +0000 UTC m=+4474.023596037" watchObservedRunningTime="2025-11-25 16:08:35.291881182 +0000 UTC m=+4474.039015632"
Nov 25 16:08:38 crc kubenswrapper[4731]: I1125 16:08:38.811529 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:08:38 crc kubenswrapper[4731]: E1125 16:08:38.812481 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:08:48 crc kubenswrapper[4731]: I1125 16:08:48.386529 4731 generic.go:334] "Generic (PLEG): container finished" podID="a19ad4be-2deb-46d8-a83c-bede9ab7c510" containerID="44943f513f17e84f388f4d958ea5674a57704787285db3c478582df9e1619df8" exitCode=0
Nov 25 16:08:48 crc kubenswrapper[4731]: I1125 16:08:48.386640 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-56bpd/crc-debug-mmbvk" event={"ID":"a19ad4be-2deb-46d8-a83c-bede9ab7c510","Type":"ContainerDied","Data":"44943f513f17e84f388f4d958ea5674a57704787285db3c478582df9e1619df8"}
Nov 25 16:08:49 crc kubenswrapper[4731]: I1125 16:08:49.525734 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/crc-debug-mmbvk"
Nov 25 16:08:49 crc kubenswrapper[4731]: I1125 16:08:49.578908 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-56bpd/crc-debug-mmbvk"]
Nov 25 16:08:49 crc kubenswrapper[4731]: I1125 16:08:49.590193 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-56bpd/crc-debug-mmbvk"]
Nov 25 16:08:49 crc kubenswrapper[4731]: I1125 16:08:49.615203 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a19ad4be-2deb-46d8-a83c-bede9ab7c510-host\") pod \"a19ad4be-2deb-46d8-a83c-bede9ab7c510\" (UID: \"a19ad4be-2deb-46d8-a83c-bede9ab7c510\") "
Nov 25 16:08:49 crc kubenswrapper[4731]: I1125 16:08:49.615439 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a19ad4be-2deb-46d8-a83c-bede9ab7c510-host" (OuterVolumeSpecName: "host") pod "a19ad4be-2deb-46d8-a83c-bede9ab7c510" (UID: "a19ad4be-2deb-46d8-a83c-bede9ab7c510"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 16:08:49 crc kubenswrapper[4731]: I1125 16:08:49.615487 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98vsf\" (UniqueName: \"kubernetes.io/projected/a19ad4be-2deb-46d8-a83c-bede9ab7c510-kube-api-access-98vsf\") pod \"a19ad4be-2deb-46d8-a83c-bede9ab7c510\" (UID: \"a19ad4be-2deb-46d8-a83c-bede9ab7c510\") "
Nov 25 16:08:49 crc kubenswrapper[4731]: I1125 16:08:49.615942 4731 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a19ad4be-2deb-46d8-a83c-bede9ab7c510-host\") on node \"crc\" DevicePath \"\""
Nov 25 16:08:49 crc kubenswrapper[4731]: I1125 16:08:49.622059 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a19ad4be-2deb-46d8-a83c-bede9ab7c510-kube-api-access-98vsf" (OuterVolumeSpecName: "kube-api-access-98vsf") pod "a19ad4be-2deb-46d8-a83c-bede9ab7c510" (UID: "a19ad4be-2deb-46d8-a83c-bede9ab7c510"). InnerVolumeSpecName "kube-api-access-98vsf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 16:08:49 crc kubenswrapper[4731]: I1125 16:08:49.717331 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98vsf\" (UniqueName: \"kubernetes.io/projected/a19ad4be-2deb-46d8-a83c-bede9ab7c510-kube-api-access-98vsf\") on node \"crc\" DevicePath \"\""
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.407758 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c03d8ad13a330fc37ebdb6272458a024283b6360930594b635f54765a1036a5e"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.407866 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/crc-debug-mmbvk"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.749656 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-56bpd/crc-debug-6z6f7"]
Nov 25 16:08:50 crc kubenswrapper[4731]: E1125 16:08:50.750161 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a19ad4be-2deb-46d8-a83c-bede9ab7c510" containerName="container-00"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.750178 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="a19ad4be-2deb-46d8-a83c-bede9ab7c510" containerName="container-00"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.750403 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="a19ad4be-2deb-46d8-a83c-bede9ab7c510" containerName="container-00"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.751140 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/crc-debug-6z6f7"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.824161 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a19ad4be-2deb-46d8-a83c-bede9ab7c510" path="/var/lib/kubelet/pods/a19ad4be-2deb-46d8-a83c-bede9ab7c510/volumes"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.841688 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cee542f9-2506-408e-b0fc-9e72193bacf5-host\") pod \"crc-debug-6z6f7\" (UID: \"cee542f9-2506-408e-b0fc-9e72193bacf5\") " pod="openshift-must-gather-56bpd/crc-debug-6z6f7"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.841751 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ch9c\" (UniqueName: \"kubernetes.io/projected/cee542f9-2506-408e-b0fc-9e72193bacf5-kube-api-access-8ch9c\") pod \"crc-debug-6z6f7\" (UID: \"cee542f9-2506-408e-b0fc-9e72193bacf5\") " pod="openshift-must-gather-56bpd/crc-debug-6z6f7"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.943746 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cee542f9-2506-408e-b0fc-9e72193bacf5-host\") pod \"crc-debug-6z6f7\" (UID: \"cee542f9-2506-408e-b0fc-9e72193bacf5\") " pod="openshift-must-gather-56bpd/crc-debug-6z6f7"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.943890 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ch9c\" (UniqueName: \"kubernetes.io/projected/cee542f9-2506-408e-b0fc-9e72193bacf5-kube-api-access-8ch9c\") pod \"crc-debug-6z6f7\" (UID: \"cee542f9-2506-408e-b0fc-9e72193bacf5\") " pod="openshift-must-gather-56bpd/crc-debug-6z6f7"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.943950 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cee542f9-2506-408e-b0fc-9e72193bacf5-host\") pod \"crc-debug-6z6f7\" (UID: \"cee542f9-2506-408e-b0fc-9e72193bacf5\") " pod="openshift-must-gather-56bpd/crc-debug-6z6f7"
Nov 25 16:08:50 crc kubenswrapper[4731]: I1125 16:08:50.972028 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ch9c\" (UniqueName: \"kubernetes.io/projected/cee542f9-2506-408e-b0fc-9e72193bacf5-kube-api-access-8ch9c\") pod \"crc-debug-6z6f7\" (UID: \"cee542f9-2506-408e-b0fc-9e72193bacf5\") " pod="openshift-must-gather-56bpd/crc-debug-6z6f7"
Nov 25 16:08:51 crc kubenswrapper[4731]: I1125 16:08:51.071033 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/crc-debug-6z6f7"
Nov 25 16:08:51 crc kubenswrapper[4731]: I1125 16:08:51.418909 4731 generic.go:334] "Generic (PLEG): container finished" podID="cee542f9-2506-408e-b0fc-9e72193bacf5" containerID="991efd226915d337a10c83cf77d5f15d3a4e021b563c7bd2a0e09e0e13614f40" exitCode=1
Nov 25 16:08:51 crc kubenswrapper[4731]: I1125 16:08:51.418981 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-56bpd/crc-debug-6z6f7" event={"ID":"cee542f9-2506-408e-b0fc-9e72193bacf5","Type":"ContainerDied","Data":"991efd226915d337a10c83cf77d5f15d3a4e021b563c7bd2a0e09e0e13614f40"}
Nov 25 16:08:51 crc kubenswrapper[4731]: I1125 16:08:51.419186 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-56bpd/crc-debug-6z6f7" event={"ID":"cee542f9-2506-408e-b0fc-9e72193bacf5","Type":"ContainerStarted","Data":"3dfd9f70ed3234ed62221cbc17a2c33ac100a0073526e414ae43510e31c5ea33"}
Nov 25 16:08:51 crc kubenswrapper[4731]: I1125 16:08:51.462544 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-56bpd/crc-debug-6z6f7"]
Nov 25 16:08:51 crc kubenswrapper[4731]: I1125 16:08:51.475478 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-56bpd/crc-debug-6z6f7"]
Nov 25 16:08:52 crc kubenswrapper[4731]: I1125 16:08:52.818921 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:08:52 crc kubenswrapper[4731]: E1125 16:08:52.819490 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:08:53 crc kubenswrapper[4731]: I1125 16:08:53.069059 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/crc-debug-6z6f7"
Nov 25 16:08:53 crc kubenswrapper[4731]: I1125 16:08:53.188171 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cee542f9-2506-408e-b0fc-9e72193bacf5-host\") pod \"cee542f9-2506-408e-b0fc-9e72193bacf5\" (UID: \"cee542f9-2506-408e-b0fc-9e72193bacf5\") "
Nov 25 16:08:53 crc kubenswrapper[4731]: I1125 16:08:53.188301 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cee542f9-2506-408e-b0fc-9e72193bacf5-host" (OuterVolumeSpecName: "host") pod "cee542f9-2506-408e-b0fc-9e72193bacf5" (UID: "cee542f9-2506-408e-b0fc-9e72193bacf5"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Nov 25 16:08:53 crc kubenswrapper[4731]: I1125 16:08:53.188397 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ch9c\" (UniqueName: \"kubernetes.io/projected/cee542f9-2506-408e-b0fc-9e72193bacf5-kube-api-access-8ch9c\") pod \"cee542f9-2506-408e-b0fc-9e72193bacf5\" (UID: \"cee542f9-2506-408e-b0fc-9e72193bacf5\") "
Nov 25 16:08:53 crc kubenswrapper[4731]: I1125 16:08:53.188882 4731 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/cee542f9-2506-408e-b0fc-9e72193bacf5-host\") on node \"crc\" DevicePath \"\""
Nov 25 16:08:53 crc kubenswrapper[4731]: I1125 16:08:53.198569 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee542f9-2506-408e-b0fc-9e72193bacf5-kube-api-access-8ch9c" (OuterVolumeSpecName: "kube-api-access-8ch9c") pod "cee542f9-2506-408e-b0fc-9e72193bacf5" (UID: "cee542f9-2506-408e-b0fc-9e72193bacf5"). InnerVolumeSpecName "kube-api-access-8ch9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 16:08:53 crc kubenswrapper[4731]: I1125 16:08:53.291792 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ch9c\" (UniqueName: \"kubernetes.io/projected/cee542f9-2506-408e-b0fc-9e72193bacf5-kube-api-access-8ch9c\") on node \"crc\" DevicePath \"\""
Nov 25 16:08:53 crc kubenswrapper[4731]: I1125 16:08:53.439338 4731 scope.go:117] "RemoveContainer" containerID="991efd226915d337a10c83cf77d5f15d3a4e021b563c7bd2a0e09e0e13614f40"
Nov 25 16:08:53 crc kubenswrapper[4731]: I1125 16:08:53.439550 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/crc-debug-6z6f7"
Nov 25 16:08:54 crc kubenswrapper[4731]: I1125 16:08:54.824867 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee542f9-2506-408e-b0fc-9e72193bacf5" path="/var/lib/kubelet/pods/cee542f9-2506-408e-b0fc-9e72193bacf5/volumes"
Nov 25 16:09:06 crc kubenswrapper[4731]: I1125 16:09:06.812033 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:09:06 crc kubenswrapper[4731]: E1125 16:09:06.812858 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:09:21 crc kubenswrapper[4731]: I1125 16:09:21.811433 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:09:21 crc kubenswrapper[4731]: E1125 16:09:21.812196 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:09:34 crc kubenswrapper[4731]: I1125 16:09:34.811216 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:09:34 crc kubenswrapper[4731]: E1125 16:09:34.811932 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:09:47 crc kubenswrapper[4731]: I1125 16:09:47.811459 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:09:47 crc kubenswrapper[4731]: E1125 16:09:47.812440 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:09:59 crc kubenswrapper[4731]: I1125 16:09:59.811285 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:09:59 crc kubenswrapper[4731]: E1125 16:09:59.812169 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:10:03 crc kubenswrapper[4731]: I1125 16:10:03.322154 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a/init-config-reloader/0.log"
Nov 25 16:10:03 crc kubenswrapper[4731]: I1125 16:10:03.557975 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a/alertmanager/0.log"
Nov 25 16:10:03 crc kubenswrapper[4731]: I1125 16:10:03.606638 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a/init-config-reloader/0.log"
Nov 25 16:10:03 crc kubenswrapper[4731]: I1125 16:10:03.638143 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_7a85bf4e-9eca-4bdb-9194-0aa8a6380a7a/config-reloader/0.log"
Nov 25 16:10:03 crc kubenswrapper[4731]: I1125 16:10:03.748179 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e8bbb605-448a-4d7a-a7b1-55d3cacec4f7/aodh-api/0.log"
Nov 25 16:10:03 crc kubenswrapper[4731]: I1125 16:10:03.808852 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e8bbb605-448a-4d7a-a7b1-55d3cacec4f7/aodh-evaluator/0.log"
Nov 25 16:10:03 crc kubenswrapper[4731]: I1125 16:10:03.864666 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e8bbb605-448a-4d7a-a7b1-55d3cacec4f7/aodh-listener/0.log"
Nov 25 16:10:03 crc kubenswrapper[4731]: I1125 16:10:03.877078 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_e8bbb605-448a-4d7a-a7b1-55d3cacec4f7/aodh-notifier/0.log"
Nov 25 16:10:04 crc kubenswrapper[4731]: I1125 16:10:04.015950 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6dc4f8989b-lbfrn_5b11609c-ee96-473a-b1e6-dbbf07a9bab2/barbican-api/0.log"
Nov 25 16:10:04 crc kubenswrapper[4731]: I1125 16:10:04.081887 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6dc4f8989b-lbfrn_5b11609c-ee96-473a-b1e6-dbbf07a9bab2/barbican-api-log/0.log"
Nov 25 16:10:04 crc kubenswrapper[4731]: I1125 16:10:04.260543 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5c7d6d4554-f24zm_b8ba4d2e-6323-48b0-b781-313cab464832/barbican-keystone-listener/0.log"
Nov 25 16:10:04 crc kubenswrapper[4731]: I1125 16:10:04.293833 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-5c7d6d4554-f24zm_b8ba4d2e-6323-48b0-b781-313cab464832/barbican-keystone-listener-log/0.log"
Nov 25 16:10:04 crc kubenswrapper[4731]: I1125 16:10:04.368965 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dd99f9879-qskfg_4493c299-fefe-4934-b370-f2270a1b6226/barbican-worker/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.050371 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-7dd99f9879-qskfg_4493c299-fefe-4934-b370-f2270a1b6226/barbican-worker-log/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.077853 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-bghjf_141271e2-9df3-449f-9590-6d7c43c8563b/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.243523 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2d6f5688-5558-47ce-87a9-01d3f428c541/ceilometer-central-agent/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.285692 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2d6f5688-5558-47ce-87a9-01d3f428c541/proxy-httpd/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.304482 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2d6f5688-5558-47ce-87a9-01d3f428c541/ceilometer-notification-agent/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.319591 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_2d6f5688-5558-47ce-87a9-01d3f428c541/sg-core/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.576714 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3/cinder-api-log/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.598705 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_a95c46d5-eb91-4400-ae4a-ec69ddfdb8e3/cinder-api/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.760801 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_23096767-3b8d-4153-8d68-0285b66b2b1b/cinder-scheduler/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.821000 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_23096767-3b8d-4153-8d68-0285b66b2b1b/probe/0.log"
Nov 25 16:10:05 crc kubenswrapper[4731]: I1125 16:10:05.825271 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-4bmf5_75b68a16-e019-49f8-89b7-daa43f7c1fa0/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:06 crc kubenswrapper[4731]: I1125 16:10:06.024893 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-zzlxd_6c7d1dc4-dbc1-4d90-a697-4c4600bb041e/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:06 crc kubenswrapper[4731]: I1125 16:10:06.097432 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-n9ptm_fb99c57e-0890-4b05-9072-311048298a07/init/0.log"
Nov 25 16:10:06 crc kubenswrapper[4731]: I1125 16:10:06.981010 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-n9ptm_fb99c57e-0890-4b05-9072-311048298a07/dnsmasq-dns/0.log"
Nov 25 16:10:06 crc kubenswrapper[4731]: I1125 16:10:06.984028 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-n9ptm_fb99c57e-0890-4b05-9072-311048298a07/init/0.log"
Nov 25 16:10:06 crc kubenswrapper[4731]: I1125 16:10:06.986345 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-bp6tm_0e902f4e-eb88-45a6-8150-73c6bce43b9f/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:07 crc kubenswrapper[4731]: I1125 16:10:07.184882 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f6e0de1e-5e50-468c-a911-41b1c3e7261a/glance-log/0.log"
Nov 25 16:10:07 crc kubenswrapper[4731]: I1125 16:10:07.188350 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f6e0de1e-5e50-468c-a911-41b1c3e7261a/glance-httpd/0.log"
Nov 25 16:10:07 crc kubenswrapper[4731]: I1125 16:10:07.246060 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b713395b-6feb-411a-b13b-d35890a8b2d3/glance-httpd/0.log"
Nov 25 16:10:07 crc kubenswrapper[4731]: I1125 16:10:07.266788 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b713395b-6feb-411a-b13b-d35890a8b2d3/glance-log/0.log"
Nov 25 16:10:07 crc kubenswrapper[4731]: I1125 16:10:07.696409 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-574677c86-dzlj9_21056706-0226-48f6-bbe2-5a8c5fb49c7d/heat-engine/0.log"
Nov 25 16:10:07 crc kubenswrapper[4731]: I1125 16:10:07.847459 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-kcj2g_deaabd87-d775-4f26-bf8e-c515967c2867/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:07 crc kubenswrapper[4731]: I1125 16:10:07.949665 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-cdf8597bf-t8vdc_c7de698f-a9bd-4231-9cee-dd0c1c93e461/heat-api/0.log"
Nov 25 16:10:08 crc kubenswrapper[4731]: I1125 16:10:08.000874 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-8cf475f5-wpg2b_f49ff135-870b-4b14-8ec3-a20bec8b9ba6/heat-cfnapi/0.log"
Nov 25 16:10:08 crc kubenswrapper[4731]: I1125 16:10:08.054760 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-lls57_026d447e-1194-4316-b484-6525a214f761/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:08 crc kubenswrapper[4731]: I1125 16:10:08.229001 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29401441-6mw2n_c1605aa1-c529-4c48-ac65-895048226e49/keystone-cron/0.log"
Nov 25 16:10:08 crc kubenswrapper[4731]: I1125 16:10:08.269592 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-65d859dcf6-bblhv_3f147555-b406-4784-8ee3-b25c2521ce34/keystone-api/0.log"
Nov 25 16:10:08 crc kubenswrapper[4731]: I1125 16:10:08.345727 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_57141195-3615-4e7f-b643-96c727cb9070/kube-state-metrics/0.log"
Nov 25 16:10:08 crc kubenswrapper[4731]: I1125 16:10:08.532019 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gxthr_90c7443c-b0ed-4432-8a15-f1d26a2161e9/libvirt-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:08 crc kubenswrapper[4731]: I1125 16:10:08.822263 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-545595f45c-m798f_e70199f9-3fcf-4b0f-98c5-c45faf62c604/neutron-httpd/0.log"
Nov 25 16:10:08 crc kubenswrapper[4731]: I1125 16:10:08.831500 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-545595f45c-m798f_e70199f9-3fcf-4b0f-98c5-c45faf62c604/neutron-api/0.log"
Nov 25 16:10:08 crc kubenswrapper[4731]: I1125 16:10:08.924167 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-2mlk6_62e629fe-ccc6-4323-b226-1bc0b4591c2a/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:09 crc kubenswrapper[4731]: I1125 16:10:09.305784 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f/nova-api-log/0.log"
Nov 25 16:10:09 crc kubenswrapper[4731]: I1125 16:10:09.503064 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_5cc27bf9-bdd0-43ec-8a78-6b0a8313d8b3/nova-cell0-conductor-conductor/0.log"
Nov 25 16:10:09 crc kubenswrapper[4731]: I1125 16:10:09.706065 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_cfc2b506-8d10-4cfc-ae43-0978926809ac/nova-cell1-conductor-conductor/0.log"
Nov 25 16:10:09 crc kubenswrapper[4731]: I1125 16:10:09.761951 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_b53f7264-a27b-4ea3-9c4a-2ff8905c2a2f/nova-api-api/0.log"
Nov 25 16:10:09 crc kubenswrapper[4731]: I1125 16:10:09.905185 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_d1ab61a1-85f7-4bf9-93f8-598cc80c8283/nova-cell1-novncproxy-novncproxy/0.log"
Nov 25 16:10:09 crc kubenswrapper[4731]: I1125 16:10:09.953711 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-9jzw8_6ca6a8fb-1ee3-40d3-b8c7-733438b20c35/nova-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:10 crc kubenswrapper[4731]: I1125 16:10:10.276312 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_209fb0e3-5c2d-4e1d-b735-432bc80aa947/nova-metadata-log/0.log"
Nov 25 16:10:10 crc kubenswrapper[4731]: I1125 16:10:10.461393 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_250bffb8-9700-49f5-a1a2-39577b5bcba3/nova-scheduler-scheduler/0.log"
Nov 25 16:10:10 crc kubenswrapper[4731]: I1125 16:10:10.540400 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_600ccfb1-2789-4b20-8012-9b6bdbf836ff/mysql-bootstrap/0.log"
Nov 25 16:10:10 crc kubenswrapper[4731]: I1125 16:10:10.813372 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:10:10 crc kubenswrapper[4731]: E1125 16:10:10.815024 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:10:10 crc kubenswrapper[4731]: I1125 16:10:10.882735 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_600ccfb1-2789-4b20-8012-9b6bdbf836ff/galera/0.log"
Nov 25 16:10:10 crc kubenswrapper[4731]: I1125 16:10:10.996043 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_600ccfb1-2789-4b20-8012-9b6bdbf836ff/mysql-bootstrap/0.log"
Nov 25 16:10:11 crc kubenswrapper[4731]: I1125 16:10:11.115851 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c32b705-7078-46a9-9858-16932a2b2675/mysql-bootstrap/0.log"
Nov 25 16:10:11 crc kubenswrapper[4731]: I1125 16:10:11.339915 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c32b705-7078-46a9-9858-16932a2b2675/mysql-bootstrap/0.log"
Nov 25 16:10:11 crc kubenswrapper[4731]: I1125 16:10:11.393011 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6c32b705-7078-46a9-9858-16932a2b2675/galera/0.log"
Nov 25 16:10:11 crc kubenswrapper[4731]: I1125 16:10:11.554423 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_37c1e8ac-9952-41e7-9572-b2112cbd8d28/openstackclient/0.log"
Nov 25 16:10:11 crc kubenswrapper[4731]: I1125 16:10:11.688744 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-crvzg_5881193a-868d-4633-ae6d-0428aa755057/ovn-controller/0.log"
Nov 25 16:10:11 crc kubenswrapper[4731]: I1125 16:10:11.813832 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_209fb0e3-5c2d-4e1d-b735-432bc80aa947/nova-metadata-metadata/0.log"
Nov 25 16:10:11 crc kubenswrapper[4731]: I1125 16:10:11.851968 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-hvl9d_e1f1fddc-9c04-4e31-b6c4-528d39e050a8/openstack-network-exporter/0.log"
Nov 25 16:10:12 crc kubenswrapper[4731]: I1125 16:10:12.087451 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w2gw5_0b83f089-05d3-4362-b02c-ba4293807c86/ovsdb-server-init/0.log"
Nov 25 16:10:12 crc kubenswrapper[4731]: I1125 16:10:12.369041 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w2gw5_0b83f089-05d3-4362-b02c-ba4293807c86/ovsdb-server/0.log"
Nov 25 16:10:12 crc kubenswrapper[4731]: I1125 16:10:12.403118 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w2gw5_0b83f089-05d3-4362-b02c-ba4293807c86/ovsdb-server-init/0.log"
Nov 25 16:10:12 crc kubenswrapper[4731]: I1125 16:10:12.404300 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-w2gw5_0b83f089-05d3-4362-b02c-ba4293807c86/ovs-vswitchd/0.log"
Nov 25 16:10:12 crc kubenswrapper[4731]: I1125 16:10:12.590772 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-lks4s_0b01dd33-8188-45e4-a53a-c01ea9df00f7/ovn-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:12 crc kubenswrapper[4731]: I1125 16:10:12.684371 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_dc7742e1-0cb7-4119-be98-f544587f3687/openstack-network-exporter/0.log"
Nov 25 16:10:12 crc kubenswrapper[4731]: I1125 16:10:12.768490 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_dc7742e1-0cb7-4119-be98-f544587f3687/ovn-northd/0.log"
Nov 25 16:10:12 crc kubenswrapper[4731]: I1125 16:10:12.953545 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fecb6adf-6993-4a45-a85a-4fa15a484be1/openstack-network-exporter/0.log"
Nov 25 16:10:12 crc kubenswrapper[4731]: I1125 16:10:12.976709 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_fecb6adf-6993-4a45-a85a-4fa15a484be1/ovsdbserver-nb/0.log"
Nov 25 16:10:13 crc kubenswrapper[4731]: I1125 16:10:13.128109 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1b455e72-bfa2-4a6f-9b54-8424af4e4c53/openstack-network-exporter/0.log"
Nov 25 16:10:13 crc kubenswrapper[4731]: I1125 16:10:13.263599 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_1b455e72-bfa2-4a6f-9b54-8424af4e4c53/ovsdbserver-sb/0.log"
Nov 25 16:10:13 crc kubenswrapper[4731]: I1125 16:10:13.348621 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-ccd9b5c94-l8cl7_3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b/placement-api/0.log"
Nov 25 16:10:13 crc kubenswrapper[4731]: I1125 16:10:13.433083 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-ccd9b5c94-l8cl7_3bdd8e6a-83bc-46d1-9e78-9c4fa45b4b3b/placement-log/0.log"
Nov 25 16:10:13 crc kubenswrapper[4731]: I1125 16:10:13.464852 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_053ea4bb-51e8-4900-b8ad-fae14168a43f/init-config-reloader/0.log"
Nov 25 16:10:13 crc kubenswrapper[4731]: I1125 16:10:13.801502 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_053ea4bb-51e8-4900-b8ad-fae14168a43f/init-config-reloader/0.log"
Nov 25 16:10:13 crc kubenswrapper[4731]: I1125 16:10:13.812133 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_053ea4bb-51e8-4900-b8ad-fae14168a43f/thanos-sidecar/0.log"
Nov 25 16:10:13 crc kubenswrapper[4731]: I1125 16:10:13.813603 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_053ea4bb-51e8-4900-b8ad-fae14168a43f/prometheus/0.log"
Nov 25 16:10:13 crc kubenswrapper[4731]: I1125 16:10:13.823736 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_053ea4bb-51e8-4900-b8ad-fae14168a43f/config-reloader/0.log"
Nov 25 16:10:14 crc kubenswrapper[4731]: I1125 16:10:14.008837 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9902e27-284b-4015-994b-9b27f3254c5d/setup-container/0.log"
Nov 25 16:10:14 crc kubenswrapper[4731]: I1125 16:10:14.299380 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9902e27-284b-4015-994b-9b27f3254c5d/setup-container/0.log"
Nov 25 16:10:14 crc kubenswrapper[4731]: I1125 16:10:14.341183 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_c9902e27-284b-4015-994b-9b27f3254c5d/rabbitmq/0.log"
Nov 25 16:10:14 crc kubenswrapper[4731]: I1125 16:10:14.345008 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_abe2a548-008b-4d3d-b0d1-d6e62dc15a4d/setup-container/0.log"
Nov 25 16:10:14 crc kubenswrapper[4731]: I1125 16:10:14.579932 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_abe2a548-008b-4d3d-b0d1-d6e62dc15a4d/setup-container/0.log"
Nov 25 16:10:14 crc kubenswrapper[4731]: I1125 16:10:14.652927 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_abe2a548-008b-4d3d-b0d1-d6e62dc15a4d/rabbitmq/0.log"
Nov 25 16:10:14 crc kubenswrapper[4731]: I1125 16:10:14.798699 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pjdlh_4eb1672c-88da-4d49-a0dc-6ddaabc48284/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:15 crc kubenswrapper[4731]: I1125 16:10:15.512211 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-h9bbc_4e21c7c5-3531-4470-b2d5-a57c748b4add/redhat-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:15 crc kubenswrapper[4731]: I1125 16:10:15.538173 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-nwdmq_35e801a7-1cfb-402e-9194-9e25042d0d85/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:15 crc kubenswrapper[4731]: I1125 16:10:15.742654 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-bqnjx_ecbbc5f3-fd96-43bb-9c4a-ccf7584f2543/run-os-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:15 crc kubenswrapper[4731]: I1125 16:10:15.839999 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-sgtgr_b6f2222b-0367-4b65-8481-bccefb346462/ssh-known-hosts-edpm-deployment/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.105339 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b87f8dc-297hh_5a662040-fd64-4441-86b9-9fc355a75142/proxy-server/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.148268 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-85b87f8dc-297hh_5a662040-fd64-4441-86b9-9fc355a75142/proxy-httpd/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.211256 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-br7gv_6528b54d-5bbd-4f24-9041-e1dd5fef70c7/swift-ring-rebalance/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.294445 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/account-auditor/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.350388 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/account-reaper/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.546147 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/account-replicator/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.552148 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/account-server/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.605783 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/container-auditor/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.670205 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/container-replicator/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.789326 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/container-updater/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.798366 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/container-server/0.log"
Nov 25 16:10:16 crc kubenswrapper[4731]: I1125 16:10:16.823968 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/object-auditor/0.log"
Nov 25 16:10:17 crc kubenswrapper[4731]: I1125 16:10:17.385401 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/object-replicator/0.log"
Nov 25 16:10:17 crc kubenswrapper[4731]: I1125 16:10:17.387772 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/object-expirer/0.log"
Nov 25 16:10:17 crc kubenswrapper[4731]: I1125 16:10:17.448339 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/object-updater/0.log"
Nov 25 16:10:17 crc kubenswrapper[4731]: I1125 16:10:17.461840 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/object-server/0.log"
Nov 25 16:10:17 crc kubenswrapper[4731]: I1125 16:10:17.619939 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/rsync/0.log"
Nov 25 16:10:17 crc kubenswrapper[4731]: I1125 16:10:17.644913 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_7e07b944-1135-4cda-ae5f-6c387117d5bd/swift-recon-cron/0.log"
Nov 25 16:10:17 crc kubenswrapper[4731]: I1125 16:10:17.739592 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-722hf_4fbd03b7-1cca-4719-ae8b-e532f0d61c9e/telemetry-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:17 crc kubenswrapper[4731]: I1125 16:10:17.904197 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jtzmp_15424794-1bec-4dde-9c0c-7828167a0bd7/validate-network-edpm-deployment-openstack-edpm-ipam/0.log"
Nov 25 16:10:24 crc kubenswrapper[4731]: I1125 16:10:24.811967 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:10:24 crc kubenswrapper[4731]: E1125 16:10:24.812901 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:10:29 crc kubenswrapper[4731]: I1125 16:10:29.340176 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_29e42be3-60a3-434a-92b2-6692d4316991/memcached/0.log"
Nov 25 16:10:37 crc kubenswrapper[4731]: I1125 16:10:37.812037 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:10:37 crc kubenswrapper[4731]: E1125 16:10:37.812765 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:10:47 crc kubenswrapper[4731]: I1125 16:10:47.382180 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/util/0.log"
Nov 25 16:10:47 crc kubenswrapper[4731]: I1125 16:10:47.883501 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/pull/0.log"
Nov 25 16:10:47 crc kubenswrapper[4731]: I1125 16:10:47.902250 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/util/0.log"
Nov 25 16:10:47 crc kubenswrapper[4731]: I1125 16:10:47.942193 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/pull/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.144959 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/pull/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.149525 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/extract/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.157100 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_9c330481b548a3e2022878a91496c44b66d1e68c2586d43f05522a3156lqc85_3958d450-0a9a-4191-b0e0-5989df2da551/util/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.380370 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-vldj4_a706f3bc-21c7-416e-b7cb-f0886804bc08/kube-rbac-proxy/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.456161 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-86dc4d89c8-vldj4_a706f3bc-21c7-416e-b7cb-f0886804bc08/manager/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.457231 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-8w79f_92f78ba4-d5a9-43f8-bb6d-7d226c301ca4/kube-rbac-proxy/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.659864 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-79856dc55c-8w79f_92f78ba4-d5a9-43f8-bb6d-7d226c301ca4/manager/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.680368 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hxwn7_f5180254-3122-4550-b259-f870f4575073/kube-rbac-proxy/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.697073 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-7d695c9b56-hxwn7_f5180254-3122-4550-b259-f870f4575073/manager/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.864379 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-r52fx_8212bdfe-bb4a-4865-9ef9-ba58232188f7/kube-rbac-proxy/0.log"
Nov 25 16:10:48 crc kubenswrapper[4731]: I1125 16:10:48.985289 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-68b95954c9-r52fx_8212bdfe-bb4a-4865-9ef9-ba58232188f7/manager/0.log"
Nov 25 16:10:49 crc kubenswrapper[4731]: I1125 16:10:49.090687 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-kxfnt_ab316c81-bc21-47b6-8dbd-bd6c26cc2769/kube-rbac-proxy/0.log"
Nov 25 16:10:49 crc kubenswrapper[4731]: I1125 16:10:49.161063 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-774b86978c-kxfnt_ab316c81-bc21-47b6-8dbd-bd6c26cc2769/manager/0.log"
Nov 25 16:10:49 crc kubenswrapper[4731]: I1125 16:10:49.683247 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-kn45z_e4f0ff64-5199-4a19-8657-08ff58d915da/manager/0.log"
Nov 25 16:10:49 crc kubenswrapper[4731]: I1125 16:10:49.686176 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c9694994-kn45z_e4f0ff64-5199-4a19-8657-08ff58d915da/kube-rbac-proxy/0.log"
Nov 25 16:10:49 crc kubenswrapper[4731]: I1125 16:10:49.853079 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-2sf58_d195c5e8-4c33-44ad-ba19-deadfc6e21ef/kube-rbac-proxy/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.042640 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-sh9ng_fa757060-f39f-4482-8fcb-41ae41ef9cb9/kube-rbac-proxy/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.057590 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-d5cc86f4b-2sf58_d195c5e8-4c33-44ad-ba19-deadfc6e21ef/manager/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.138561 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5bfcdc958c-sh9ng_fa757060-f39f-4482-8fcb-41ae41ef9cb9/manager/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.270221 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-h4whj_83e6ff4c-b0f0-4d61-8923-31c0d3f1398f/kube-rbac-proxy/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.332995 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-748dc6576f-h4whj_83e6ff4c-b0f0-4d61-8923-31c0d3f1398f/manager/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.421294 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-2npjw_1af6726c-a2ca-43a1-b932-cddcd28457d1/kube-rbac-proxy/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.519164 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-58bb8d67cc-2npjw_1af6726c-a2ca-43a1-b932-cddcd28457d1/manager/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.613585 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-7kltb_01518faf-8637-4852-b877-b5c8ca2c7d91/kube-rbac-proxy/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.661673 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-cb6c4fdb7-7kltb_01518faf-8637-4852-b877-b5c8ca2c7d91/manager/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.786263 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-j2wt5_f5ea628e-44a7-499c-a433-14d18a9bb804/kube-rbac-proxy/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.868496 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-7c57c8bbc4-j2wt5_f5ea628e-44a7-499c-a433-14d18a9bb804/manager/0.log"
Nov 25 16:10:50 crc kubenswrapper[4731]: I1125 16:10:50.985152 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-qjrbz_39362f78-55ff-4380-80b3-c46e2d004749/kube-rbac-proxy/0.log"
Nov 25 16:10:51 crc kubenswrapper[4731]: I1125 16:10:51.100773 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-bdwql_1eedb0ec-4f2e-43da-8e15-aae0436e942e/kube-rbac-proxy/0.log"
Nov 25 16:10:51 crc kubenswrapper[4731]: I1125 16:10:51.125871 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79556f57fc-qjrbz_39362f78-55ff-4380-80b3-c46e2d004749/manager/0.log"
Nov 25 16:10:51 crc kubenswrapper[4731]: I1125 16:10:51.249280 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-fd75fd47d-bdwql_1eedb0ec-4f2e-43da-8e15-aae0436e942e/manager/0.log"
Nov 25 16:10:51 crc kubenswrapper[4731]: I1125 16:10:51.389184 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-kctp7_1bdef4b3-96db-42a5-aefd-2641cced3d1e/kube-rbac-proxy/0.log"
Nov 25 16:10:51 crc kubenswrapper[4731]: I1125 16:10:51.405613 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-544b9bb9-kctp7_1bdef4b3-96db-42a5-aefd-2641cced3d1e/manager/0.log"
Nov 25 16:10:51 crc kubenswrapper[4731]: I1125 16:10:51.784218 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-6k7ns_f573d017-983d-413d-adea-34617a278460/registry-server/0.log"
Nov 25 16:10:51 crc kubenswrapper[4731]: I1125 16:10:51.895665 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-f546cbb6c-wz8k8_c2b163b6-05e5-4c6e-95f9-2748e85eb037/operator/0.log"
Nov 25 16:10:52 crc kubenswrapper[4731]: I1125 16:10:52.020055 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-wmjzm_77bbfdb5-d5b4-4512-ac18-e3d787eea6f2/kube-rbac-proxy/0.log"
Nov 25 16:10:52 crc kubenswrapper[4731]: I1125 16:10:52.145355 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-66cf5c67ff-wmjzm_77bbfdb5-d5b4-4512-ac18-e3d787eea6f2/manager/0.log"
Nov 25 16:10:52 crc kubenswrapper[4731]: I1125 16:10:52.247014 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-8m7t9_61221a67-95aa-4e96-a41c-4d9343305157/kube-rbac-proxy/0.log"
Nov 25 16:10:52 crc kubenswrapper[4731]: I1125 16:10:52.347032 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5db546f9d9-8m7t9_61221a67-95aa-4e96-a41c-4d9343305157/manager/0.log"
Nov 25 16:10:52 crc kubenswrapper[4731]: I1125 16:10:52.481487 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-6tddn_2aeac7ab-78a9-49f2-9640-0c91b73bbe57/operator/0.log"
Nov 25 16:10:52 crc kubenswrapper[4731]: I1125 16:10:52.644205 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-ft6fb_bc21926a-c3cf-49fa-addf-01d3156c1b5f/kube-rbac-proxy/0.log"
Nov 25 16:10:52 crc kubenswrapper[4731]: I1125 16:10:52.745003 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6fdc4fcf86-ft6fb_bc21926a-c3cf-49fa-addf-01d3156c1b5f/manager/0.log"
Nov 25 16:10:52 crc kubenswrapper[4731]: I1125 16:10:52.818046 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:10:52 crc kubenswrapper[4731]: E1125 16:10:52.818290 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca"
Nov 25 16:10:52 crc kubenswrapper[4731]: I1125 16:10:52.851075 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_089e8703-f71e-4f81-91af-19224ba7b1b4/kube-rbac-proxy/0.log"
Nov 25 16:10:53 crc kubenswrapper[4731]: I1125 16:10:53.075164 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-9w7kw_54f47a6e-c9be-49a3-89af-43ecbd6b7b93/kube-rbac-proxy/0.log"
Nov 25 16:10:53 crc kubenswrapper[4731]: I1125 16:10:53.149200 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5cb74df96-9w7kw_54f47a6e-c9be-49a3-89af-43ecbd6b7b93/manager/0.log"
Nov 25 16:10:53 crc kubenswrapper[4731]: I1125 16:10:53.229921 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-75cd8d4b6c-kkqgj_089e8703-f71e-4f81-91af-19224ba7b1b4/manager/0.log"
Nov 25 16:10:53 crc kubenswrapper[4731]: I1125 16:10:53.292439 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-sk2wl_158dbcc7-a9af-464b-9886-4f5f6f4bbf71/kube-rbac-proxy/0.log"
Nov 25 16:10:53 crc kubenswrapper[4731]: I1125 16:10:53.393937 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-77d84b6b6b-mdwdb_7d6095a8-a478-472d-b84b-c99877ddcdf1/manager/0.log"
Nov 25 16:10:53 crc kubenswrapper[4731]: I1125 16:10:53.400264 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-864885998-sk2wl_158dbcc7-a9af-464b-9886-4f5f6f4bbf71/manager/0.log"
Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.539515 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jfjh7"]
Nov 25 16:10:55 crc kubenswrapper[4731]: E1125 16:10:55.541555 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cee542f9-2506-408e-b0fc-9e72193bacf5" containerName="container-00"
Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.541682 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="cee542f9-2506-408e-b0fc-9e72193bacf5" containerName="container-00"
Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.542080 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="cee542f9-2506-408e-b0fc-9e72193bacf5" containerName="container-00"
Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.544138 4731 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.559633 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jfjh7"] Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.595209 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwsb7\" (UniqueName: \"kubernetes.io/projected/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-kube-api-access-nwsb7\") pod \"redhat-operators-jfjh7\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.595629 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-utilities\") pod \"redhat-operators-jfjh7\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.595691 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-catalog-content\") pod \"redhat-operators-jfjh7\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.697058 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-utilities\") pod \"redhat-operators-jfjh7\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.697118 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-catalog-content\") pod \"redhat-operators-jfjh7\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.697216 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwsb7\" (UniqueName: \"kubernetes.io/projected/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-kube-api-access-nwsb7\") pod \"redhat-operators-jfjh7\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.697645 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-utilities\") pod \"redhat-operators-jfjh7\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.697733 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-catalog-content\") pod \"redhat-operators-jfjh7\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.723998 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nwsb7\" (UniqueName: \"kubernetes.io/projected/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-kube-api-access-nwsb7\") pod \"redhat-operators-jfjh7\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:55 crc kubenswrapper[4731]: I1125 16:10:55.883485 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:10:56 crc kubenswrapper[4731]: I1125 16:10:56.470718 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jfjh7"] Nov 25 16:10:56 crc kubenswrapper[4731]: I1125 16:10:56.709020 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfjh7" event={"ID":"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb","Type":"ContainerStarted","Data":"ae9923100ac158c0747d097634eb47c8e1b76340a8fb881004a430388aa5f654"} Nov 25 16:10:56 crc kubenswrapper[4731]: I1125 16:10:56.709372 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfjh7" event={"ID":"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb","Type":"ContainerStarted","Data":"de88e7e44347d8230f9867d0793a2655df43990774543bb7ada252dc71dcfa3d"} Nov 25 16:10:57 crc kubenswrapper[4731]: I1125 16:10:57.722295 4731 generic.go:334] "Generic (PLEG): container finished" podID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerID="ae9923100ac158c0747d097634eb47c8e1b76340a8fb881004a430388aa5f654" exitCode=0 Nov 25 16:10:57 crc kubenswrapper[4731]: I1125 16:10:57.722402 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfjh7" event={"ID":"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb","Type":"ContainerDied","Data":"ae9923100ac158c0747d097634eb47c8e1b76340a8fb881004a430388aa5f654"} Nov 25 16:10:59 crc kubenswrapper[4731]: I1125 16:10:59.743341 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfjh7" event={"ID":"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb","Type":"ContainerStarted","Data":"176b66f15b2d0dabe315dac542ef9a8ed13768ffc1ef3d2175b1e73711a44a71"} Nov 25 16:11:04 crc kubenswrapper[4731]: I1125 16:11:04.812487 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" Nov 25 16:11:04 crc kubenswrapper[4731]: E1125 16:11:04.814086 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 16:11:12 crc kubenswrapper[4731]: I1125 16:11:12.728173 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-qpv96_013533b2-fe47-4b12-9ecf-98705944a1eb/control-plane-machine-set-operator/0.log" Nov 25 16:11:12 crc kubenswrapper[4731]: I1125 16:11:12.903354 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vpcm5_740209e9-0d7f-4da0-b68f-2ef5628039df/machine-api-operator/0.log" Nov 25 16:11:12 crc kubenswrapper[4731]: I1125 16:11:12.915025 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vpcm5_740209e9-0d7f-4da0-b68f-2ef5628039df/kube-rbac-proxy/0.log" Nov 25 16:11:13 crc kubenswrapper[4731]: I1125 16:11:13.904620 4731 generic.go:334] "Generic (PLEG): container finished" podID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerID="176b66f15b2d0dabe315dac542ef9a8ed13768ffc1ef3d2175b1e73711a44a71" exitCode=0 Nov 25 16:11:13 crc kubenswrapper[4731]: I1125 16:11:13.904713 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfjh7" event={"ID":"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb","Type":"ContainerDied","Data":"176b66f15b2d0dabe315dac542ef9a8ed13768ffc1ef3d2175b1e73711a44a71"} Nov 25 16:11:19 crc kubenswrapper[4731]: I1125 16:11:19.811858 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" Nov 25 16:11:19 crc kubenswrapper[4731]: E1125 16:11:19.815544 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 16:11:19 crc kubenswrapper[4731]: I1125 16:11:19.957193 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfjh7" event={"ID":"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb","Type":"ContainerStarted","Data":"96d059eb0d0ca0e5308c6268473dbba7dbad251ec1fe4a3620c722c58dab2ec9"} Nov 25 16:11:19 crc kubenswrapper[4731]: I1125 16:11:19.978792 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jfjh7" podStartSLOduration=4.289581456 podStartE2EDuration="24.97876998s" podCreationTimestamp="2025-11-25 16:10:55 +0000 UTC" firstStartedPulling="2025-11-25 16:10:57.725408678 +0000 UTC m=+4616.472543108" lastFinishedPulling="2025-11-25 16:11:18.414597202 +0000 UTC m=+4637.161731632" observedRunningTime="2025-11-25 16:11:19.973450527 +0000 UTC m=+4638.720584977" watchObservedRunningTime="2025-11-25 16:11:19.97876998 +0000 UTC m=+4638.725904410" Nov 25 16:11:25 crc kubenswrapper[4731]: I1125 16:11:25.884605 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:11:25 crc kubenswrapper[4731]: I1125 16:11:25.885260 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:11:25 crc kubenswrapper[4731]: I1125 16:11:25.936226 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:11:26 crc kubenswrapper[4731]: I1125 16:11:26.088482 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:11:26 crc kubenswrapper[4731]: I1125 16:11:26.149202 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7bwk9_4d05d1b3-fbd6-4efc-83a7-b446881f41a1/cert-manager-controller/0.log" Nov 25 16:11:26 crc kubenswrapper[4731]: I1125 16:11:26.293166 4731 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-z8qcw_e6f08a04-49da-431f-8ad5-2669b0395a83/cert-manager-cainjector/0.log" Nov 25 16:11:26 crc kubenswrapper[4731]: I1125 16:11:26.413471 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-s2z26_b8915b48-72ed-4372-bfbf-7f933af258e0/cert-manager-webhook/0.log" Nov 25 16:11:26 crc kubenswrapper[4731]: I1125 16:11:26.729140 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jfjh7"] Nov 25 16:11:28 crc kubenswrapper[4731]: I1125 16:11:28.057679 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jfjh7" podUID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerName="registry-server" containerID="cri-o://96d059eb0d0ca0e5308c6268473dbba7dbad251ec1fe4a3620c722c58dab2ec9" gracePeriod=2 Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.089408 4731 generic.go:334] "Generic (PLEG): container finished" podID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerID="96d059eb0d0ca0e5308c6268473dbba7dbad251ec1fe4a3620c722c58dab2ec9" exitCode=0 Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.089516 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfjh7" event={"ID":"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb","Type":"ContainerDied","Data":"96d059eb0d0ca0e5308c6268473dbba7dbad251ec1fe4a3620c722c58dab2ec9"} Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.482798 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.614219 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-utilities\") pod \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.614290 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-catalog-content\") pod \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.614320 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwsb7\" (UniqueName: \"kubernetes.io/projected/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-kube-api-access-nwsb7\") pod \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\" (UID: \"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb\") " Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.615318 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-utilities" (OuterVolumeSpecName: "utilities") pod "bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" (UID: "bff5e8a7-8a7d-4f22-baf6-caed23e66ffb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.620127 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-kube-api-access-nwsb7" (OuterVolumeSpecName: "kube-api-access-nwsb7") pod "bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" (UID: "bff5e8a7-8a7d-4f22-baf6-caed23e66ffb"). 
InnerVolumeSpecName "kube-api-access-nwsb7". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.716645 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.716687 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwsb7\" (UniqueName: \"kubernetes.io/projected/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-kube-api-access-nwsb7\") on node \"crc\" DevicePath \"\"" Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.740975 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" (UID: "bff5e8a7-8a7d-4f22-baf6-caed23e66ffb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:11:29 crc kubenswrapper[4731]: I1125 16:11:29.819155 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 16:11:30 crc kubenswrapper[4731]: I1125 16:11:30.103102 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jfjh7" event={"ID":"bff5e8a7-8a7d-4f22-baf6-caed23e66ffb","Type":"ContainerDied","Data":"de88e7e44347d8230f9867d0793a2655df43990774543bb7ada252dc71dcfa3d"} Nov 25 16:11:30 crc kubenswrapper[4731]: I1125 16:11:30.103159 4731 scope.go:117] "RemoveContainer" containerID="96d059eb0d0ca0e5308c6268473dbba7dbad251ec1fe4a3620c722c58dab2ec9" Nov 25 16:11:30 crc kubenswrapper[4731]: I1125 16:11:30.103161 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jfjh7" Nov 25 16:11:30 crc kubenswrapper[4731]: I1125 16:11:30.138521 4731 scope.go:117] "RemoveContainer" containerID="176b66f15b2d0dabe315dac542ef9a8ed13768ffc1ef3d2175b1e73711a44a71" Nov 25 16:11:30 crc kubenswrapper[4731]: I1125 16:11:30.146979 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jfjh7"] Nov 25 16:11:30 crc kubenswrapper[4731]: I1125 16:11:30.155755 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jfjh7"] Nov 25 16:11:30 crc kubenswrapper[4731]: I1125 16:11:30.168913 4731 scope.go:117] "RemoveContainer" containerID="ae9923100ac158c0747d097634eb47c8e1b76340a8fb881004a430388aa5f654" Nov 25 16:11:30 crc kubenswrapper[4731]: I1125 16:11:30.821207 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" path="/var/lib/kubelet/pods/bff5e8a7-8a7d-4f22-baf6-caed23e66ffb/volumes" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.274114 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hv4c6"] Nov 25 16:11:34 crc kubenswrapper[4731]: E1125 16:11:34.275797 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerName="extract-utilities" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.275832 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerName="extract-utilities" Nov 25 16:11:34 crc kubenswrapper[4731]: E1125 16:11:34.275860 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerName="extract-content" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.275868 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerName="extract-content" Nov 25 16:11:34 crc kubenswrapper[4731]: E1125 16:11:34.275897 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerName="registry-server" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.275904 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerName="registry-server" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.276165 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="bff5e8a7-8a7d-4f22-baf6-caed23e66ffb" containerName="registry-server" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.278278 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.287778 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hv4c6"] Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.419345 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-utilities\") pod \"community-operators-hv4c6\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.419465 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-catalog-content\") pod \"community-operators-hv4c6\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.419519 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9828\" (UniqueName: \"kubernetes.io/projected/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-kube-api-access-w9828\") pod \"community-operators-hv4c6\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.522582 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-utilities\") pod \"community-operators-hv4c6\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.522740 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-catalog-content\") pod \"community-operators-hv4c6\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.522791 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9828\" (UniqueName: \"kubernetes.io/projected/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-kube-api-access-w9828\") pod \"community-operators-hv4c6\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.523278 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-utilities\") pod \"community-operators-hv4c6\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.523345 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-catalog-content\") pod \"community-operators-hv4c6\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.552310 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w9828\" (UniqueName: \"kubernetes.io/projected/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-kube-api-access-w9828\") pod \"community-operators-hv4c6\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.611790 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:34 crc kubenswrapper[4731]: I1125 16:11:34.812266 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" Nov 25 16:11:34 crc kubenswrapper[4731]: E1125 16:11:34.812573 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 16:11:35 crc kubenswrapper[4731]: I1125 16:11:35.277951 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hv4c6"] Nov 25 16:11:36 crc kubenswrapper[4731]: I1125 16:11:36.175904 4731 generic.go:334] "Generic (PLEG): container finished" podID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" containerID="5bfed69206dba63275226faac8c70d47ca389fdec86b79cb37c7690ca35f2716" exitCode=0 Nov 25 16:11:36 crc kubenswrapper[4731]: I1125 16:11:36.175940 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hv4c6" event={"ID":"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88","Type":"ContainerDied","Data":"5bfed69206dba63275226faac8c70d47ca389fdec86b79cb37c7690ca35f2716"} Nov 25 16:11:36 crc kubenswrapper[4731]: I1125 16:11:36.176188 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hv4c6" event={"ID":"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88","Type":"ContainerStarted","Data":"077cbd1a467aeffad3a228f240cea3c6453cb03dd599cac2cc444ae4358d2f1c"} Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.653364 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8v6dv"] Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.656551 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.665903 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8v6dv"] Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.831690 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-utilities\") pod \"certified-operators-8v6dv\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") " pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.832240 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-catalog-content\") pod \"certified-operators-8v6dv\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") " pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.832498 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngs44\" (UniqueName: \"kubernetes.io/projected/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-kube-api-access-ngs44\") pod \"certified-operators-8v6dv\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") " pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.934513 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-catalog-content\") pod \"certified-operators-8v6dv\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") " pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.934809 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngs44\" (UniqueName: \"kubernetes.io/projected/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-kube-api-access-ngs44\") pod \"certified-operators-8v6dv\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") " pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.934915 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-utilities\") pod \"certified-operators-8v6dv\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") " pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.935240 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-catalog-content\") pod \"certified-operators-8v6dv\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") " pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.935355 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-utilities\") pod \"certified-operators-8v6dv\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") " pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.961617 4731 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ngs44\" (UniqueName: \"kubernetes.io/projected/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-kube-api-access-ngs44\") pod \"certified-operators-8v6dv\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") " pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:39 crc kubenswrapper[4731]: I1125 16:11:39.980033 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8v6dv" Nov 25 16:11:40 crc kubenswrapper[4731]: I1125 16:11:40.574406 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8v6dv"] Nov 25 16:11:41 crc kubenswrapper[4731]: I1125 16:11:41.087650 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5874bd7bc5-p7zbr_6f5cb68e-43ff-4a3a-9917-b6b8349ddb03/nmstate-console-plugin/0.log" Nov 25 16:11:41 crc kubenswrapper[4731]: I1125 16:11:41.238167 4731 generic.go:334] "Generic (PLEG): container finished" podID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerID="afca5b7d0798f41b0334c8210a868143cbbf5af242c60a33d19d62ac0e1a42b8" exitCode=0 Nov 25 16:11:41 crc kubenswrapper[4731]: I1125 16:11:41.238257 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v6dv" event={"ID":"ffaf0b19-6df2-4531-82ab-e6506f7b64c0","Type":"ContainerDied","Data":"afca5b7d0798f41b0334c8210a868143cbbf5af242c60a33d19d62ac0e1a42b8"} Nov 25 16:11:41 crc kubenswrapper[4731]: I1125 16:11:41.238581 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v6dv" event={"ID":"ffaf0b19-6df2-4531-82ab-e6506f7b64c0","Type":"ContainerStarted","Data":"cc16693b11abe6930151fe14a3c9d6b82d855389cdd8e5f384d3d5ce25940d76"} Nov 25 16:11:41 crc kubenswrapper[4731]: I1125 16:11:41.253837 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-58n5h_39a7d4f0-c39b-4650-8ea6-dc34e4cfceb9/nmstate-handler/0.log" Nov 25 16:11:41 crc kubenswrapper[4731]: I1125 16:11:41.318664 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-qjzpd_60a0d26a-381d-43f4-9cd1-e9172996b0e2/kube-rbac-proxy/0.log" Nov 25 16:11:41 crc kubenswrapper[4731]: I1125 16:11:41.357246 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-5dcf9c57c5-qjzpd_60a0d26a-381d-43f4-9cd1-e9172996b0e2/nmstate-metrics/0.log" Nov 25 16:11:41 crc kubenswrapper[4731]: I1125 16:11:41.497491 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-557fdffb88-zbjzj_5a45a5cc-bdc6-4daa-b2fd-509237906c01/nmstate-operator/0.log" Nov 25 16:11:42 crc kubenswrapper[4731]: I1125 16:11:42.034626 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6b89b748d8-rpjkc_c271b7db-3cd3-4045-a954-497f15d9f89d/nmstate-webhook/0.log" Nov 25 16:11:44 crc kubenswrapper[4731]: I1125 16:11:44.266160 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hv4c6" event={"ID":"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88","Type":"ContainerStarted","Data":"ee3a2a84e4a4f60f6d970509264b239eaafedab11790a73db688d22408e28881"} Nov 25 16:11:46 crc kubenswrapper[4731]: I1125 16:11:46.295806 4731 generic.go:334] "Generic (PLEG): container finished" podID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" 
containerID="ee3a2a84e4a4f60f6d970509264b239eaafedab11790a73db688d22408e28881" exitCode=0 Nov 25 16:11:46 crc kubenswrapper[4731]: I1125 16:11:46.295866 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hv4c6" event={"ID":"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88","Type":"ContainerDied","Data":"ee3a2a84e4a4f60f6d970509264b239eaafedab11790a73db688d22408e28881"} Nov 25 16:11:46 crc kubenswrapper[4731]: I1125 16:11:46.299688 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v6dv" event={"ID":"ffaf0b19-6df2-4531-82ab-e6506f7b64c0","Type":"ContainerStarted","Data":"69693df01d8c1826e3f60ae3ce57c2b64de63a032baafeadb80eac17e747ec49"} Nov 25 16:11:48 crc kubenswrapper[4731]: I1125 16:11:48.812291 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" Nov 25 16:11:48 crc kubenswrapper[4731]: E1125 16:11:48.813193 4731 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-f6xlz_openshift-machine-config-operator(f469eb10-0aba-4dba-a237-899e0b8dabca)\"" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" Nov 25 16:11:49 crc kubenswrapper[4731]: I1125 16:11:49.334057 4731 generic.go:334] "Generic (PLEG): container finished" podID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerID="69693df01d8c1826e3f60ae3ce57c2b64de63a032baafeadb80eac17e747ec49" exitCode=0 Nov 25 16:11:49 crc kubenswrapper[4731]: I1125 16:11:49.334109 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v6dv" event={"ID":"ffaf0b19-6df2-4531-82ab-e6506f7b64c0","Type":"ContainerDied","Data":"69693df01d8c1826e3f60ae3ce57c2b64de63a032baafeadb80eac17e747ec49"} Nov 25 16:11:51 crc kubenswrapper[4731]: I1125 16:11:51.353499 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hv4c6" event={"ID":"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88","Type":"ContainerStarted","Data":"39c9ba54aa3687d52f7c1a47f3c8c566855c3d96b7f667111db1ebfb988814a1"} Nov 25 16:11:52 crc kubenswrapper[4731]: I1125 16:11:52.387433 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hv4c6" podStartSLOduration=4.227445263 podStartE2EDuration="18.387414773s" podCreationTimestamp="2025-11-25 16:11:34 +0000 UTC" firstStartedPulling="2025-11-25 16:11:36.177673487 +0000 UTC m=+4654.924807917" lastFinishedPulling="2025-11-25 16:11:50.337642997 +0000 UTC m=+4669.084777427" observedRunningTime="2025-11-25 16:11:52.382242364 +0000 UTC m=+4671.129376794" watchObservedRunningTime="2025-11-25 16:11:52.387414773 +0000 UTC m=+4671.134549203" Nov 25 16:11:54 crc kubenswrapper[4731]: I1125 16:11:54.612826 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:54 crc kubenswrapper[4731]: I1125 16:11:54.613385 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:54 crc kubenswrapper[4731]: I1125 16:11:54.663027 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:55 crc kubenswrapper[4731]: 
I1125 16:11:55.281548 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5kfg7_febb59e9-b008-465d-890a-6a6a342cf56b/kube-rbac-proxy/0.log" Nov 25 16:11:55 crc kubenswrapper[4731]: I1125 16:11:55.367434 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6c7b4b5f48-5kfg7_febb59e9-b008-465d-890a-6a6a342cf56b/controller/0.log" Nov 25 16:11:55 crc kubenswrapper[4731]: I1125 16:11:55.438107 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:11:55 crc kubenswrapper[4731]: I1125 16:11:55.503782 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hv4c6"] Nov 25 16:11:55 crc kubenswrapper[4731]: I1125 16:11:55.521737 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-frr-files/0.log" Nov 25 16:11:55 crc kubenswrapper[4731]: I1125 16:11:55.694272 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-frr-files/0.log" Nov 25 16:11:55 crc kubenswrapper[4731]: I1125 16:11:55.730886 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-metrics/0.log" Nov 25 16:11:55 crc kubenswrapper[4731]: I1125 16:11:55.733717 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-reloader/0.log" Nov 25 16:11:55 crc kubenswrapper[4731]: I1125 16:11:55.849942 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-reloader/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.115031 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-frr-files/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.132528 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-metrics/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.133445 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-reloader/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.286411 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-metrics/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.436661 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-metrics/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.443408 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-frr-files/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.444435 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/cp-reloader/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.472760 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/controller/0.log" Nov 25 16:11:56 crc 
kubenswrapper[4731]: I1125 16:11:56.637170 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/frr-metrics/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.670774 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/kube-rbac-proxy/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.699990 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/kube-rbac-proxy-frr/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.902055 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/reloader/0.log" Nov 25 16:11:56 crc kubenswrapper[4731]: I1125 16:11:56.937092 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-6998585d5-qxfjt_fe90ff60-e166-4e73-a995-dfd82a45895c/frr-k8s-webhook-server/0.log" Nov 25 16:11:57 crc kubenswrapper[4731]: I1125 16:11:57.198696 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6bc94d877-jvccm_b6957d61-fdfc-49b1-8721-a14991fc51c5/manager/0.log" Nov 25 16:11:57 crc kubenswrapper[4731]: I1125 16:11:57.327538 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-796ff6997-qwkk4_af60cb8b-34de-4af2-a647-84ed628f77f0/webhook-server/0.log" Nov 25 16:11:57 crc kubenswrapper[4731]: I1125 16:11:57.400556 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2jnfr_5df2adbb-3909-4e10-8f11-5953fb961766/kube-rbac-proxy/0.log" Nov 25 16:11:57 crc kubenswrapper[4731]: I1125 16:11:57.405836 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hv4c6" podUID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" containerName="registry-server" containerID="cri-o://39c9ba54aa3687d52f7c1a47f3c8c566855c3d96b7f667111db1ebfb988814a1" gracePeriod=2 Nov 25 16:11:58 crc kubenswrapper[4731]: I1125 16:11:58.001554 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-2jnfr_5df2adbb-3909-4e10-8f11-5953fb961766/speaker/0.log" Nov 25 16:11:58 crc kubenswrapper[4731]: I1125 16:11:58.382736 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-2r6xw_7d337b1a-530a-4c6d-8eec-acd385c4dd65/frr/0.log" Nov 25 16:11:59 crc kubenswrapper[4731]: I1125 16:11:59.428093 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v6dv" event={"ID":"ffaf0b19-6df2-4531-82ab-e6506f7b64c0","Type":"ContainerStarted","Data":"2aae19a78f82f148db8275c2814219844b8417295cbaa858734c28b9f6f9291f"} Nov 25 16:12:00 crc kubenswrapper[4731]: I1125 16:12:00.460270 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8v6dv" podStartSLOduration=7.30902474 podStartE2EDuration="21.460248241s" podCreationTimestamp="2025-11-25 16:11:39 +0000 UTC" firstStartedPulling="2025-11-25 16:11:42.582916018 +0000 UTC m=+4661.330050448" lastFinishedPulling="2025-11-25 16:11:56.734139519 +0000 UTC m=+4675.481273949" observedRunningTime="2025-11-25 16:12:00.45624234 +0000 UTC m=+4679.203376780" watchObservedRunningTime="2025-11-25 16:12:00.460248241 +0000 UTC m=+4679.207382671" Nov 25 16:12:01 
crc kubenswrapper[4731]: I1125 16:12:01.447346 4731 generic.go:334] "Generic (PLEG): container finished" podID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" containerID="39c9ba54aa3687d52f7c1a47f3c8c566855c3d96b7f667111db1ebfb988814a1" exitCode=0 Nov 25 16:12:01 crc kubenswrapper[4731]: I1125 16:12:01.447401 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hv4c6" event={"ID":"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88","Type":"ContainerDied","Data":"39c9ba54aa3687d52f7c1a47f3c8c566855c3d96b7f667111db1ebfb988814a1"} Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.459845 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hv4c6" event={"ID":"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88","Type":"ContainerDied","Data":"077cbd1a467aeffad3a228f240cea3c6453cb03dd599cac2cc444ae4358d2f1c"} Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.460172 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="077cbd1a467aeffad3a228f240cea3c6453cb03dd599cac2cc444ae4358d2f1c" Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.484313 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hv4c6" Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.583539 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-catalog-content\") pod \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.583829 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-utilities\") pod \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.583901 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9828\" (UniqueName: \"kubernetes.io/projected/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-kube-api-access-w9828\") pod \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\" (UID: \"ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88\") " Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.585661 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-utilities" (OuterVolumeSpecName: "utilities") pod "ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" (UID: "ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.590282 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-kube-api-access-w9828" (OuterVolumeSpecName: "kube-api-access-w9828") pod "ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" (UID: "ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88"). InnerVolumeSpecName "kube-api-access-w9828". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.686607 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.686673 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9828\" (UniqueName: \"kubernetes.io/projected/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-kube-api-access-w9828\") on node \"crc\" DevicePath \"\""
Nov 25 16:12:02 crc kubenswrapper[4731]: I1125 16:12:02.822436 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c"
Nov 25 16:12:03 crc kubenswrapper[4731]: I1125 16:12:03.466972 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hv4c6"
Nov 25 16:12:04 crc kubenswrapper[4731]: I1125 16:12:04.994792 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" (UID: "ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 16:12:05 crc kubenswrapper[4731]: I1125 16:12:05.041593 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 16:12:05 crc kubenswrapper[4731]: I1125 16:12:05.299570 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hv4c6"]
Nov 25 16:12:05 crc kubenswrapper[4731]: I1125 16:12:05.308203 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hv4c6"]
Nov 25 16:12:05 crc kubenswrapper[4731]: I1125 16:12:05.484516 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"24a12702147a9e5f9ee7a48477219fc53372f24d1299b9929cc38a3c6fbe6044"}
Nov 25 16:12:06 crc kubenswrapper[4731]: I1125 16:12:06.823712 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" path="/var/lib/kubelet/pods/ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88/volumes"
Nov 25 16:12:09 crc kubenswrapper[4731]: I1125 16:12:09.980584 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8v6dv"
Nov 25 16:12:09 crc kubenswrapper[4731]: I1125 16:12:09.981382 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8v6dv"
Nov 25 16:12:10 crc kubenswrapper[4731]: I1125 16:12:10.036249 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8v6dv"
Nov 25 16:12:10 crc kubenswrapper[4731]: I1125 16:12:10.583100 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8v6dv"
Nov 25 16:12:10 crc kubenswrapper[4731]: I1125 16:12:10.869155 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8v6dv"]
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.113572 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/util/0.log"
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.297363 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/util/0.log"
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.324240 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/pull/0.log"
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.363439 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/pull/0.log"
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.578461 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/pull/0.log"
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.594086 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/extract/0.log"
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.616587 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5c796334424b8139919e908729ac8fe5c1f6e7b6bc33540f00b4f8772er5dzh_b6d7e2d5-3efb-47ec-9cb5-685467f22976/util/0.log"
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.753737 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/util/0.log"
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.927870 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/pull/0.log"
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.964199 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/pull/0.log"
Nov 25 16:12:11 crc kubenswrapper[4731]: I1125 16:12:11.990246 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/util/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.145963 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/pull/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.167489 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/extract/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.208725 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210gxqdj_26eed3e4-1783-4718-b1a2-39ba37a78a12/util/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.343004 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8v6dv_ffaf0b19-6df2-4531-82ab-e6506f7b64c0/extract-utilities/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.535655 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8v6dv_ffaf0b19-6df2-4531-82ab-e6506f7b64c0/extract-utilities/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.538075 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8v6dv_ffaf0b19-6df2-4531-82ab-e6506f7b64c0/extract-content/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.548300 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8v6dv" podUID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerName="registry-server" containerID="cri-o://2aae19a78f82f148db8275c2814219844b8417295cbaa858734c28b9f6f9291f" gracePeriod=2
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.549628 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8v6dv_ffaf0b19-6df2-4531-82ab-e6506f7b64c0/extract-content/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.714285 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8v6dv_ffaf0b19-6df2-4531-82ab-e6506f7b64c0/extract-utilities/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.736921 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8v6dv_ffaf0b19-6df2-4531-82ab-e6506f7b64c0/extract-content/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.738700 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8v6dv_ffaf0b19-6df2-4531-82ab-e6506f7b64c0/registry-server/0.log"
Nov 25 16:12:12 crc kubenswrapper[4731]: I1125 16:12:12.886635 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-utilities/0.log"
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.051488 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-content/0.log"
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.063013 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-content/0.log"
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.078431 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-utilities/0.log"
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.231556 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-content/0.log"
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.241952 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/extract-utilities/0.log"
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.515491 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-utilities/0.log"
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.565053 4731 generic.go:334] "Generic (PLEG): container finished" podID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerID="2aae19a78f82f148db8275c2814219844b8417295cbaa858734c28b9f6f9291f" exitCode=0
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.565112 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v6dv" event={"ID":"ffaf0b19-6df2-4531-82ab-e6506f7b64c0","Type":"ContainerDied","Data":"2aae19a78f82f148db8275c2814219844b8417295cbaa858734c28b9f6f9291f"}
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.883019 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8v6dv"
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.923648 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-b5pqt_87a958bc-c1ba-4b0b-903d-b21a84408594/registry-server/0.log"
Nov 25 16:12:13 crc kubenswrapper[4731]: I1125 16:12:13.995199 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-content/0.log"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.021009 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngs44\" (UniqueName: \"kubernetes.io/projected/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-kube-api-access-ngs44\") pod \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") "
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.021245 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-utilities\") pod \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") "
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.021480 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-catalog-content\") pod \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\" (UID: \"ffaf0b19-6df2-4531-82ab-e6506f7b64c0\") "
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.022223 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-utilities" (OuterVolumeSpecName: "utilities") pod "ffaf0b19-6df2-4531-82ab-e6506f7b64c0" (UID: "ffaf0b19-6df2-4531-82ab-e6506f7b64c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.030104 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-kube-api-access-ngs44" (OuterVolumeSpecName: "kube-api-access-ngs44") pod "ffaf0b19-6df2-4531-82ab-e6506f7b64c0" (UID: "ffaf0b19-6df2-4531-82ab-e6506f7b64c0"). InnerVolumeSpecName "kube-api-access-ngs44". PluginName "kubernetes.io/projected", VolumeGidValue ""
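The entries above and the "Volume detached" / "Cleaned up orphaned pod volumes dir" entries that follow are the kubelet's normal teardown sequence for a deleted pod. A minimal sketch for pulling just those events out of a dump like this one; the program and its pattern list are illustrative assumptions, not part of the log or of kubelet, and it assumes one journal entry per line as reflowed here:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Events that bracket a pod teardown in kubelet logs, as seen above:
	// volume unmount, volume detach, API DELETE/REMOVE, and the final
	// orphaned-volumes cleanup.
	teardown := regexp.MustCompile(`UnmountVolume\.TearDown|Volume detached|SyncLoop (DELETE|REMOVE)|Cleaned up orphaned pod volumes dir`)

	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal entries can be long
	for sc.Scan() {
		if teardown.MatchString(sc.Text()) {
			fmt.Println(sc.Text())
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
		os.Exit(1)
	}
}

Piping this section through the filter would leave only the teardown-related entries, which makes the DELETE → unmount → detach → REMOVE ordering easier to follow.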
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.030431 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-utilities/0.log"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.030863 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-content/0.log"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.080283 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ffaf0b19-6df2-4531-82ab-e6506f7b64c0" (UID: "ffaf0b19-6df2-4531-82ab-e6506f7b64c0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.124089 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-utilities\") on node \"crc\" DevicePath \"\""
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.124449 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-catalog-content\") on node \"crc\" DevicePath \"\""
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.124538 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngs44\" (UniqueName: \"kubernetes.io/projected/ffaf0b19-6df2-4531-82ab-e6506f7b64c0-kube-api-access-ngs44\") on node \"crc\" DevicePath \"\""
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.232621 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-utilities/0.log"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.337631 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/extract-content/0.log"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.482418 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/util/0.log"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.589578 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8v6dv" event={"ID":"ffaf0b19-6df2-4531-82ab-e6506f7b64c0","Type":"ContainerDied","Data":"cc16693b11abe6930151fe14a3c9d6b82d855389cdd8e5f384d3d5ce25940d76"}
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.589637 4731 scope.go:117] "RemoveContainer" containerID="2aae19a78f82f148db8275c2814219844b8417295cbaa858734c28b9f6f9291f"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.589842 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8v6dv"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.619523 4731 scope.go:117] "RemoveContainer" containerID="69693df01d8c1826e3f60ae3ce57c2b64de63a032baafeadb80eac17e747ec49"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.678510 4731 scope.go:117] "RemoveContainer" containerID="afca5b7d0798f41b0334c8210a868143cbbf5af242c60a33d19d62ac0e1a42b8"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.686673 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8v6dv"]
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.730019 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8v6dv"]
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.766151 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-q84gz_01a573d4-5c1e-4ea1-a212-96afe26a6538/registry-server/0.log"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.812798 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/util/0.log"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.827616 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" path="/var/lib/kubelet/pods/ffaf0b19-6df2-4531-82ab-e6506f7b64c0/volumes"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.848381 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/pull/0.log"
Nov 25 16:12:14 crc kubenswrapper[4731]: I1125 16:12:14.879717 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/pull/0.log"
Nov 25 16:12:15 crc kubenswrapper[4731]: I1125 16:12:15.039109 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/util/0.log"
Nov 25 16:12:15 crc kubenswrapper[4731]: I1125 16:12:15.131571 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/extract/0.log"
Nov 25 16:12:15 crc kubenswrapper[4731]: I1125 16:12:15.133110 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_e8527aae5664f20f24bf3bbb3fd2981ba838928a8a47ce599ee258e4c662d8l_97357c05-65a9-4732-b01e-651eb43fe514/pull/0.log"
Nov 25 16:12:15 crc kubenswrapper[4731]: I1125 16:12:15.212858 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-vk7l9_2807ad74-1d0f-435b-bfb1-3b5eb4dca20d/marketplace-operator/0.log"
Nov 25 16:12:15 crc kubenswrapper[4731]: I1125 16:12:15.294023 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-utilities/0.log"
Nov 25 16:12:15 crc kubenswrapper[4731]: I1125 16:12:15.978985 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-utilities/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.120342 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-content/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.135555 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-content/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.288008 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-content/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.318573 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/extract-utilities/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.371968 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-utilities/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.507893 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5nlwp_bde46057-80c9-49a3-9bab-18d8fea09dc9/registry-server/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.590729 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-content/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.643319 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-content/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.650595 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-utilities/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.801723 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-utilities/0.log"
Nov 25 16:12:16 crc kubenswrapper[4731]: I1125 16:12:16.867675 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/extract-content/0.log"
Nov 25 16:12:17 crc kubenswrapper[4731]: I1125 16:12:17.499267 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-qzqp4_7ac8f507-158c-4c89-9c07-f64a08a42283/registry-server/0.log"
Nov 25 16:12:28 crc kubenswrapper[4731]: I1125 16:12:28.366796 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-2vtlt_cbf9673f-5c6b-41d7-8eba-cdc0414a3910/prometheus-operator/0.log"
Nov 25 16:12:28 crc kubenswrapper[4731]: I1125 16:12:28.489546 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-69fb756767-6x8xb_236a458e-0795-42ad-8dab-84407618f96d/prometheus-operator-admission-webhook/0.log"
Nov 25 16:12:28 crc kubenswrapper[4731]: I1125 16:12:28.660479 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-69fb756767-wdpx7_900460b9-59fd-4456-925b-ed34b683ac09/prometheus-operator-admission-webhook/0.log"
Nov 25 16:12:28 crc kubenswrapper[4731]: I1125 16:12:28.779890 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-9s4pj_60ec713e-70e1-4fbf-b011-b6e4bcf87539/operator/0.log"
Nov 25 16:12:28 crc kubenswrapper[4731]: I1125 16:12:28.850652 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-qz72z_bfcf5cee-d085-4fb4-8144-9a38143246b4/perses-operator/0.log"
Nov 25 16:14:10 crc kubenswrapper[4731]: I1125 16:14:10.703492 4731 generic.go:334] "Generic (PLEG): container finished" podID="9c0853b3-8070-4067-8913-b8064b427cf5" containerID="4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163" exitCode=0
Nov 25 16:14:10 crc kubenswrapper[4731]: I1125 16:14:10.703565 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-56bpd/must-gather-t54tm" event={"ID":"9c0853b3-8070-4067-8913-b8064b427cf5","Type":"ContainerDied","Data":"4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163"}
Nov 25 16:14:10 crc kubenswrapper[4731]: I1125 16:14:10.704775 4731 scope.go:117] "RemoveContainer" containerID="4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163"
Nov 25 16:14:10 crc kubenswrapper[4731]: I1125 16:14:10.936196 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-56bpd_must-gather-t54tm_9c0853b3-8070-4067-8913-b8064b427cf5/gather/0.log"
Nov 25 16:14:20 crc kubenswrapper[4731]: I1125 16:14:20.040213 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 16:14:20 crc kubenswrapper[4731]: I1125 16:14:20.040849 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.232516 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-56bpd/must-gather-t54tm"]
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.233161 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-56bpd/must-gather-t54tm" podUID="9c0853b3-8070-4067-8913-b8064b427cf5" containerName="copy" containerID="cri-o://844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a" gracePeriod=2
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.244174 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-56bpd/must-gather-t54tm"]
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.745847 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-56bpd_must-gather-t54tm_9c0853b3-8070-4067-8913-b8064b427cf5/copy/0.log"
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.747040 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/must-gather-t54tm"
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.821980 4731 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-56bpd_must-gather-t54tm_9c0853b3-8070-4067-8913-b8064b427cf5/copy/0.log"
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.822522 4731 generic.go:334] "Generic (PLEG): container finished" podID="9c0853b3-8070-4067-8913-b8064b427cf5" containerID="844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a" exitCode=143
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.822619 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-56bpd/must-gather-t54tm"
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.830375 4731 scope.go:117] "RemoveContainer" containerID="844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a"
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.851277 4731 scope.go:117] "RemoveContainer" containerID="4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163"
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.919235 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t74lm\" (UniqueName: \"kubernetes.io/projected/9c0853b3-8070-4067-8913-b8064b427cf5-kube-api-access-t74lm\") pod \"9c0853b3-8070-4067-8913-b8064b427cf5\" (UID: \"9c0853b3-8070-4067-8913-b8064b427cf5\") "
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.919512 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c0853b3-8070-4067-8913-b8064b427cf5-must-gather-output\") pod \"9c0853b3-8070-4067-8913-b8064b427cf5\" (UID: \"9c0853b3-8070-4067-8913-b8064b427cf5\") "
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.928472 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c0853b3-8070-4067-8913-b8064b427cf5-kube-api-access-t74lm" (OuterVolumeSpecName: "kube-api-access-t74lm") pod "9c0853b3-8070-4067-8913-b8064b427cf5" (UID: "9c0853b3-8070-4067-8913-b8064b427cf5"). InnerVolumeSpecName "kube-api-access-t74lm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.961405 4731 scope.go:117] "RemoveContainer" containerID="844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a"
Nov 25 16:14:22 crc kubenswrapper[4731]: E1125 16:14:22.964298 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a\": container with ID starting with 844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a not found: ID does not exist" containerID="844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a"
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.964334 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a"} err="failed to get container status \"844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a\": rpc error: code = NotFound desc = could not find container \"844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a\": container with ID starting with 844a35b3ad2432300fdfb6625a44aff6ca2ebe0d22fb88738de44a06d19c3c1a not found: ID does not exist"
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.964359 4731 scope.go:117] "RemoveContainer" containerID="4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163"
Nov 25 16:14:22 crc kubenswrapper[4731]: E1125 16:14:22.965181 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163\": container with ID starting with 4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163 not found: ID does not exist" containerID="4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163"
Nov 25 16:14:22 crc kubenswrapper[4731]: I1125 16:14:22.965247 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163"} err="failed to get container status \"4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163\": rpc error: code = NotFound desc = could not find container \"4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163\": container with ID starting with 4fd01b81094a0487041d6c715b18254dc2c79d0769f7d2e8def46627910b3163 not found: ID does not exist"
Nov 25 16:14:23 crc kubenswrapper[4731]: I1125 16:14:23.022621 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t74lm\" (UniqueName: \"kubernetes.io/projected/9c0853b3-8070-4067-8913-b8064b427cf5-kube-api-access-t74lm\") on node \"crc\" DevicePath \"\""
Nov 25 16:14:23 crc kubenswrapper[4731]: I1125 16:14:23.089940 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c0853b3-8070-4067-8913-b8064b427cf5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "9c0853b3-8070-4067-8913-b8064b427cf5" (UID: "9c0853b3-8070-4067-8913-b8064b427cf5"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
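The E-level "ContainerStatus from runtime service failed ... NotFound" and "DeleteContainer returned error" entries above are benign: by the time the status lookup ran, the container had already been removed. A minimal sketch of the idempotent-delete pattern that behavior reflects, assuming a gRPC runtime service; deleteIgnoringNotFound and the fake runtime call are illustrative, not the kubelet's actual code path:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteIgnoringNotFound wraps a delete call so that a gRPC NotFound,
// like the one logged above, is treated as "already deleted".
func deleteIgnoringNotFound(del func(id string) error, id string) error {
	err := del(id)
	if status.Code(err) == codes.NotFound {
		return nil // container already gone: nothing left to do
	}
	return err // nil on success, or a real error worth surfacing
}

func main() {
	// Stand-in for the runtime answering NotFound, as in the log above.
	fake := func(id string) error {
		return status.Error(codes.NotFound, fmt.Sprintf("could not find container %q", id))
	}
	fmt.Println(deleteIgnoringNotFound(fake, "844a35b3ad24")) // prints <nil>
}

In the log, the kubelet does effectively this: it records the NotFound as an error line for visibility but does not retry or fail the pod teardown, which continues normally below.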
Nov 25 16:14:23 crc kubenswrapper[4731]: I1125 16:14:23.123993 4731 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/9c0853b3-8070-4067-8913-b8064b427cf5-must-gather-output\") on node \"crc\" DevicePath \"\""
Nov 25 16:14:24 crc kubenswrapper[4731]: I1125 16:14:24.825104 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c0853b3-8070-4067-8913-b8064b427cf5" path="/var/lib/kubelet/pods/9c0853b3-8070-4067-8913-b8064b427cf5/volumes"
Nov 25 16:14:50 crc kubenswrapper[4731]: I1125 16:14:50.040332 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 16:14:50 crc kubenswrapper[4731]: I1125 16:14:50.040946 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.159098 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"]
Nov 25 16:15:00 crc kubenswrapper[4731]: E1125 16:15:00.160345 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" containerName="registry-server"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160366 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" containerName="registry-server"
Nov 25 16:15:00 crc kubenswrapper[4731]: E1125 16:15:00.160382 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" containerName="extract-utilities"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160389 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" containerName="extract-utilities"
Nov 25 16:15:00 crc kubenswrapper[4731]: E1125 16:15:00.160421 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerName="registry-server"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160432 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerName="registry-server"
Nov 25 16:15:00 crc kubenswrapper[4731]: E1125 16:15:00.160447 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0853b3-8070-4067-8913-b8064b427cf5" containerName="copy"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160456 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0853b3-8070-4067-8913-b8064b427cf5" containerName="copy"
Nov 25 16:15:00 crc kubenswrapper[4731]: E1125 16:15:00.160475 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerName="extract-utilities"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160484 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerName="extract-utilities"
Nov 25 16:15:00 crc kubenswrapper[4731]: E1125 16:15:00.160496 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c0853b3-8070-4067-8913-b8064b427cf5" containerName="gather"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160503 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c0853b3-8070-4067-8913-b8064b427cf5" containerName="gather"
Nov 25 16:15:00 crc kubenswrapper[4731]: E1125 16:15:00.160518 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerName="extract-content"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160525 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerName="extract-content"
Nov 25 16:15:00 crc kubenswrapper[4731]: E1125 16:15:00.160551 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" containerName="extract-content"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160558 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" containerName="extract-content"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160803 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0853b3-8070-4067-8913-b8064b427cf5" containerName="gather"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160860 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c0853b3-8070-4067-8913-b8064b427cf5" containerName="copy"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160870 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5ef89b-6ac7-4c40-a7d7-d92aedecdf88" containerName="registry-server"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.160881 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffaf0b19-6df2-4531-82ab-e6506f7b64c0" containerName="registry-server"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.161965 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.165856 4731 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.166071 4731 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.173727 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"]
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.313556 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-secret-volume\") pod \"collect-profiles-29401455-6s4vn\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.314106 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfrwk\" (UniqueName: \"kubernetes.io/projected/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-kube-api-access-lfrwk\") pod \"collect-profiles-29401455-6s4vn\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.314459 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-config-volume\") pod \"collect-profiles-29401455-6s4vn\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.415898 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfrwk\" (UniqueName: \"kubernetes.io/projected/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-kube-api-access-lfrwk\") pod \"collect-profiles-29401455-6s4vn\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.416083 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-config-volume\") pod \"collect-profiles-29401455-6s4vn\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.416167 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-secret-volume\") pod \"collect-profiles-29401455-6s4vn\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.417468 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-config-volume\") pod \"collect-profiles-29401455-6s4vn\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.422399 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-secret-volume\") pod \"collect-profiles-29401455-6s4vn\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.432937 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfrwk\" (UniqueName: \"kubernetes.io/projected/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-kube-api-access-lfrwk\") pod \"collect-profiles-29401455-6s4vn\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.502367 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:00 crc kubenswrapper[4731]: I1125 16:15:00.980687 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"]
Nov 25 16:15:01 crc kubenswrapper[4731]: I1125 16:15:01.219185 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn" event={"ID":"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf","Type":"ContainerStarted","Data":"9424cc3e0e3f67a8c0b930e9a350bb15fbe2d116599780d2462f54cfcc65c27e"}
Nov 25 16:15:01 crc kubenswrapper[4731]: I1125 16:15:01.219236 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn" event={"ID":"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf","Type":"ContainerStarted","Data":"d92c01e4874f105906d63582181b7a802e81dcd07f9a6323b7eceaf666c77be8"}
Nov 25 16:15:01 crc kubenswrapper[4731]: I1125 16:15:01.244323 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn" podStartSLOduration=1.244298889 podStartE2EDuration="1.244298889s" podCreationTimestamp="2025-11-25 16:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-25 16:15:01.232869754 +0000 UTC m=+4859.980004214" watchObservedRunningTime="2025-11-25 16:15:01.244298889 +0000 UTC m=+4859.991433319"
Nov 25 16:15:02 crc kubenswrapper[4731]: I1125 16:15:02.234717 4731 generic.go:334] "Generic (PLEG): container finished" podID="c464dd56-4a2a-4a75-ac65-8353a9c1bbbf" containerID="9424cc3e0e3f67a8c0b930e9a350bb15fbe2d116599780d2462f54cfcc65c27e" exitCode=0
Nov 25 16:15:02 crc kubenswrapper[4731]: I1125 16:15:02.234877 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn" event={"ID":"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf","Type":"ContainerDied","Data":"9424cc3e0e3f67a8c0b930e9a350bb15fbe2d116599780d2462f54cfcc65c27e"}
Nov 25 16:15:03 crc kubenswrapper[4731]: I1125 16:15:03.609529 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:03 crc kubenswrapper[4731]: I1125 16:15:03.786480 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-config-volume\") pod \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") "
Nov 25 16:15:03 crc kubenswrapper[4731]: I1125 16:15:03.786590 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfrwk\" (UniqueName: \"kubernetes.io/projected/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-kube-api-access-lfrwk\") pod \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") "
Nov 25 16:15:03 crc kubenswrapper[4731]: I1125 16:15:03.786702 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-secret-volume\") pod \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\" (UID: \"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf\") "
Nov 25 16:15:03 crc kubenswrapper[4731]: I1125 16:15:03.787381 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-config-volume" (OuterVolumeSpecName: "config-volume") pod "c464dd56-4a2a-4a75-ac65-8353a9c1bbbf" (UID: "c464dd56-4a2a-4a75-ac65-8353a9c1bbbf"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Nov 25 16:15:03 crc kubenswrapper[4731]: I1125 16:15:03.796403 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-kube-api-access-lfrwk" (OuterVolumeSpecName: "kube-api-access-lfrwk") pod "c464dd56-4a2a-4a75-ac65-8353a9c1bbbf" (UID: "c464dd56-4a2a-4a75-ac65-8353a9c1bbbf"). InnerVolumeSpecName "kube-api-access-lfrwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Nov 25 16:15:03 crc kubenswrapper[4731]: I1125 16:15:03.800206 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c464dd56-4a2a-4a75-ac65-8353a9c1bbbf" (UID: "c464dd56-4a2a-4a75-ac65-8353a9c1bbbf"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Nov 25 16:15:03 crc kubenswrapper[4731]: I1125 16:15:03.889419 4731 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-secret-volume\") on node \"crc\" DevicePath \"\""
Nov 25 16:15:03 crc kubenswrapper[4731]: I1125 16:15:03.889477 4731 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-config-volume\") on node \"crc\" DevicePath \"\""
Nov 25 16:15:03 crc kubenswrapper[4731]: I1125 16:15:03.889492 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfrwk\" (UniqueName: \"kubernetes.io/projected/c464dd56-4a2a-4a75-ac65-8353a9c1bbbf-kube-api-access-lfrwk\") on node \"crc\" DevicePath \"\""
Nov 25 16:15:04 crc kubenswrapper[4731]: I1125 16:15:04.256961 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn" event={"ID":"c464dd56-4a2a-4a75-ac65-8353a9c1bbbf","Type":"ContainerDied","Data":"d92c01e4874f105906d63582181b7a802e81dcd07f9a6323b7eceaf666c77be8"}
Nov 25 16:15:04 crc kubenswrapper[4731]: I1125 16:15:04.257092 4731 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d92c01e4874f105906d63582181b7a802e81dcd07f9a6323b7eceaf666c77be8"
Nov 25 16:15:04 crc kubenswrapper[4731]: I1125 16:15:04.257048 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29401455-6s4vn"
Nov 25 16:15:04 crc kubenswrapper[4731]: I1125 16:15:04.327294 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"]
Nov 25 16:15:04 crc kubenswrapper[4731]: I1125 16:15:04.338338 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29401410-w9vw2"]
Nov 25 16:15:04 crc kubenswrapper[4731]: I1125 16:15:04.826074 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5874b76-c567-4ef3-ad1c-03ba10a554a4" path="/var/lib/kubelet/pods/f5874b76-c567-4ef3-ad1c-03ba10a554a4/volumes"
Nov 25 16:15:18 crc kubenswrapper[4731]: I1125 16:15:18.030971 4731 scope.go:117] "RemoveContainer" containerID="44943f513f17e84f388f4d958ea5674a57704787285db3c478582df9e1619df8"
Nov 25 16:15:18 crc kubenswrapper[4731]: I1125 16:15:18.055588 4731 scope.go:117] "RemoveContainer" containerID="127695bac3f905a91605f29e4da058855025b1b9f76bb69349465ad7b18abe6e"
Nov 25 16:15:20 crc kubenswrapper[4731]: I1125 16:15:20.040721 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Nov 25 16:15:20 crc kubenswrapper[4731]: I1125 16:15:20.041130 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Nov 25 16:15:20 crc kubenswrapper[4731]: I1125 16:15:20.041191 4731 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz"
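The probe entries above show the kubelet's HTTP liveness check against machine-config-daemon returning "connection refused" three times in a row (16:14:20, 16:14:50, 16:15:20), after which the entries below record the container being killed and restarted. A minimal sketch of the check itself; the endpoint and port come from the log, while the 3-second timeout is an assumption rather than the pod's configured probe settings:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// On this node the daemon was not listening, so the probe saw
		// "connect: connection refused", which the kubelet counts as a failure.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	// The kubelet treats any status in [200, 400) as a passing probe.
	fmt.Println("probe result:", resp.Status)
}

Run against a healthy daemon this prints a 2xx status; run against this node at 16:15:20 it would have printed the same dial error the log records, and the restart below follows.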
pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" Nov 25 16:15:20 crc kubenswrapper[4731]: I1125 16:15:20.042074 4731 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24a12702147a9e5f9ee7a48477219fc53372f24d1299b9929cc38a3c6fbe6044"} pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Nov 25 16:15:20 crc kubenswrapper[4731]: I1125 16:15:20.042131 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" containerID="cri-o://24a12702147a9e5f9ee7a48477219fc53372f24d1299b9929cc38a3c6fbe6044" gracePeriod=600 Nov 25 16:15:20 crc kubenswrapper[4731]: I1125 16:15:20.670007 4731 generic.go:334] "Generic (PLEG): container finished" podID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerID="24a12702147a9e5f9ee7a48477219fc53372f24d1299b9929cc38a3c6fbe6044" exitCode=0 Nov 25 16:15:20 crc kubenswrapper[4731]: I1125 16:15:20.670467 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerDied","Data":"24a12702147a9e5f9ee7a48477219fc53372f24d1299b9929cc38a3c6fbe6044"} Nov 25 16:15:20 crc kubenswrapper[4731]: I1125 16:15:20.670492 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" event={"ID":"f469eb10-0aba-4dba-a237-899e0b8dabca","Type":"ContainerStarted","Data":"f4dddac82e61b1e6eade96e51647b901667fb9151d66b96d5c78c6355efee873"} Nov 25 16:15:20 crc kubenswrapper[4731]: I1125 16:15:20.670510 4731 scope.go:117] "RemoveContainer" containerID="b11ecaf347642f05f0c704d8ee3ceb9e0f247df4ef3b8335b3f6b174dc0f8b0c" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.301120 4731 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mng8z"] Nov 25 16:16:51 crc kubenswrapper[4731]: E1125 16:16:51.302244 4731 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c464dd56-4a2a-4a75-ac65-8353a9c1bbbf" containerName="collect-profiles" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.302264 4731 state_mem.go:107] "Deleted CPUSet assignment" podUID="c464dd56-4a2a-4a75-ac65-8353a9c1bbbf" containerName="collect-profiles" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.302468 4731 memory_manager.go:354] "RemoveStaleState removing state" podUID="c464dd56-4a2a-4a75-ac65-8353a9c1bbbf" containerName="collect-profiles" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.304162 4731 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.317760 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mng8z"] Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.436057 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-utilities\") pod \"redhat-marketplace-mng8z\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.436128 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-catalog-content\") pod \"redhat-marketplace-mng8z\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.436345 4731 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmf9n\" (UniqueName: \"kubernetes.io/projected/f3416e55-abfd-4e6f-9a4c-03d196ea6887-kube-api-access-hmf9n\") pod \"redhat-marketplace-mng8z\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.538498 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmf9n\" (UniqueName: \"kubernetes.io/projected/f3416e55-abfd-4e6f-9a4c-03d196ea6887-kube-api-access-hmf9n\") pod \"redhat-marketplace-mng8z\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.538676 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-utilities\") pod \"redhat-marketplace-mng8z\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.538730 4731 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-catalog-content\") pod \"redhat-marketplace-mng8z\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.539329 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-catalog-content\") pod \"redhat-marketplace-mng8z\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.539459 4731 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-utilities\") pod \"redhat-marketplace-mng8z\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.563942 4731 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hmf9n\" (UniqueName: \"kubernetes.io/projected/f3416e55-abfd-4e6f-9a4c-03d196ea6887-kube-api-access-hmf9n\") pod \"redhat-marketplace-mng8z\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:51 crc kubenswrapper[4731]: I1125 16:16:51.632519 4731 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:16:52 crc kubenswrapper[4731]: I1125 16:16:52.175665 4731 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mng8z"] Nov 25 16:16:52 crc kubenswrapper[4731]: I1125 16:16:52.586654 4731 generic.go:334] "Generic (PLEG): container finished" podID="f3416e55-abfd-4e6f-9a4c-03d196ea6887" containerID="4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c" exitCode=0 Nov 25 16:16:52 crc kubenswrapper[4731]: I1125 16:16:52.586693 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mng8z" event={"ID":"f3416e55-abfd-4e6f-9a4c-03d196ea6887","Type":"ContainerDied","Data":"4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c"} Nov 25 16:16:52 crc kubenswrapper[4731]: I1125 16:16:52.586717 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mng8z" event={"ID":"f3416e55-abfd-4e6f-9a4c-03d196ea6887","Type":"ContainerStarted","Data":"c147da8b0e380c51af94a2f70f1afee7ce36bee9857e5023846848554a89232c"} Nov 25 16:16:52 crc kubenswrapper[4731]: I1125 16:16:52.589101 4731 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Nov 25 16:16:54 crc kubenswrapper[4731]: I1125 16:16:54.612217 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mng8z" event={"ID":"f3416e55-abfd-4e6f-9a4c-03d196ea6887","Type":"ContainerStarted","Data":"66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e"} Nov 25 16:16:55 crc kubenswrapper[4731]: I1125 16:16:55.624168 4731 generic.go:334] "Generic (PLEG): container finished" podID="f3416e55-abfd-4e6f-9a4c-03d196ea6887" containerID="66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e" exitCode=0 Nov 25 16:16:55 crc kubenswrapper[4731]: I1125 16:16:55.624430 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mng8z" event={"ID":"f3416e55-abfd-4e6f-9a4c-03d196ea6887","Type":"ContainerDied","Data":"66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e"} Nov 25 16:16:56 crc kubenswrapper[4731]: I1125 16:16:56.634785 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mng8z" event={"ID":"f3416e55-abfd-4e6f-9a4c-03d196ea6887","Type":"ContainerStarted","Data":"77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c"} Nov 25 16:16:56 crc kubenswrapper[4731]: I1125 16:16:56.652718 4731 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mng8z" podStartSLOduration=2.186897842 podStartE2EDuration="5.652700571s" podCreationTimestamp="2025-11-25 16:16:51 +0000 UTC" firstStartedPulling="2025-11-25 16:16:52.58879859 +0000 UTC m=+4971.335933020" lastFinishedPulling="2025-11-25 16:16:56.054601319 +0000 UTC m=+4974.801735749" observedRunningTime="2025-11-25 16:16:56.650292621 +0000 UTC m=+4975.397427061" watchObservedRunningTime="2025-11-25 16:16:56.652700571 +0000 UTC 
m=+4975.399835001" Nov 25 16:17:01 crc kubenswrapper[4731]: I1125 16:17:01.632993 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:17:01 crc kubenswrapper[4731]: I1125 16:17:01.633568 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:17:02 crc kubenswrapper[4731]: I1125 16:17:02.101343 4731 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:17:02 crc kubenswrapper[4731]: I1125 16:17:02.174087 4731 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:17:02 crc kubenswrapper[4731]: I1125 16:17:02.341784 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mng8z"] Nov 25 16:17:03 crc kubenswrapper[4731]: I1125 16:17:03.705602 4731 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mng8z" podUID="f3416e55-abfd-4e6f-9a4c-03d196ea6887" containerName="registry-server" containerID="cri-o://77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c" gracePeriod=2 Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.273304 4731 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.424737 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmf9n\" (UniqueName: \"kubernetes.io/projected/f3416e55-abfd-4e6f-9a4c-03d196ea6887-kube-api-access-hmf9n\") pod \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.424941 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-utilities\") pod \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.425057 4731 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-catalog-content\") pod \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\" (UID: \"f3416e55-abfd-4e6f-9a4c-03d196ea6887\") " Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.425668 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-utilities" (OuterVolumeSpecName: "utilities") pod "f3416e55-abfd-4e6f-9a4c-03d196ea6887" (UID: "f3416e55-abfd-4e6f-9a4c-03d196ea6887"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.428809 4731 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-utilities\") on node \"crc\" DevicePath \"\"" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.430830 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3416e55-abfd-4e6f-9a4c-03d196ea6887-kube-api-access-hmf9n" (OuterVolumeSpecName: "kube-api-access-hmf9n") pod "f3416e55-abfd-4e6f-9a4c-03d196ea6887" (UID: "f3416e55-abfd-4e6f-9a4c-03d196ea6887"). InnerVolumeSpecName "kube-api-access-hmf9n". PluginName "kubernetes.io/projected", VolumeGidValue "" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.450106 4731 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3416e55-abfd-4e6f-9a4c-03d196ea6887" (UID: "f3416e55-abfd-4e6f-9a4c-03d196ea6887"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.531012 4731 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmf9n\" (UniqueName: \"kubernetes.io/projected/f3416e55-abfd-4e6f-9a4c-03d196ea6887-kube-api-access-hmf9n\") on node \"crc\" DevicePath \"\"" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.531043 4731 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3416e55-abfd-4e6f-9a4c-03d196ea6887-catalog-content\") on node \"crc\" DevicePath \"\"" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.717031 4731 generic.go:334] "Generic (PLEG): container finished" podID="f3416e55-abfd-4e6f-9a4c-03d196ea6887" containerID="77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c" exitCode=0 Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.717078 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mng8z" event={"ID":"f3416e55-abfd-4e6f-9a4c-03d196ea6887","Type":"ContainerDied","Data":"77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c"} Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.717105 4731 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mng8z" event={"ID":"f3416e55-abfd-4e6f-9a4c-03d196ea6887","Type":"ContainerDied","Data":"c147da8b0e380c51af94a2f70f1afee7ce36bee9857e5023846848554a89232c"} Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.717105 4731 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mng8z" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.717126 4731 scope.go:117] "RemoveContainer" containerID="77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.741910 4731 scope.go:117] "RemoveContainer" containerID="66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.761445 4731 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mng8z"] Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.773296 4731 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mng8z"] Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.783568 4731 scope.go:117] "RemoveContainer" containerID="4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.818183 4731 scope.go:117] "RemoveContainer" containerID="77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c" Nov 25 16:17:04 crc kubenswrapper[4731]: E1125 16:17:04.819123 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c\": container with ID starting with 77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c not found: ID does not exist" containerID="77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.819205 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c"} err="failed to get container status \"77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c\": rpc error: code = NotFound desc = could not find container \"77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c\": container with ID starting with 77876f9d54a63d68c3056f2fabab6bf4ced85a29d4c8ec25d58fbb4e59600d3c not found: ID does not exist" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.819245 4731 scope.go:117] "RemoveContainer" containerID="66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e" Nov 25 16:17:04 crc kubenswrapper[4731]: E1125 16:17:04.819652 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e\": container with ID starting with 66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e not found: ID does not exist" containerID="66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.819710 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e"} err="failed to get container status \"66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e\": rpc error: code = NotFound desc = could not find container \"66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e\": container with ID starting with 66d1617490eee55cfabc87c932404a2ec7a4de1f9c0193179d229e740ef5d52e not found: ID does not exist" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.819747 4731 scope.go:117] "RemoveContainer" 
containerID="4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c" Nov 25 16:17:04 crc kubenswrapper[4731]: E1125 16:17:04.820122 4731 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c\": container with ID starting with 4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c not found: ID does not exist" containerID="4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.820158 4731 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c"} err="failed to get container status \"4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c\": rpc error: code = NotFound desc = could not find container \"4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c\": container with ID starting with 4338fc5c0cac429475afde7873273521408e6341b99a0c83bafbd32142eb127c not found: ID does not exist" Nov 25 16:17:04 crc kubenswrapper[4731]: I1125 16:17:04.830142 4731 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3416e55-abfd-4e6f-9a4c-03d196ea6887" path="/var/lib/kubelet/pods/f3416e55-abfd-4e6f-9a4c-03d196ea6887/volumes" Nov 25 16:17:20 crc kubenswrapper[4731]: I1125 16:17:20.039765 4731 patch_prober.go:28] interesting pod/machine-config-daemon-f6xlz container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Nov 25 16:17:20 crc kubenswrapper[4731]: I1125 16:17:20.040591 4731 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-f6xlz" podUID="f469eb10-0aba-4dba-a237-899e0b8dabca" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"